879 results for Safety critical applications
Abstract:
Embedded systems are becoming more common and more complex every day, so finding reliable, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike the situation until recently, recent technological advances in microprocessors now make it possible to build equipment with more than enough performance to run several software systems on a single machine. In addition, there are embedded systems with safety requirements, on whose correct operation the lives of many people and/or large economic investments depend. These software systems are designed and implemented in accordance with very strict and demanding software development standards. In some cases, certification of the software may also be necessary. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very interesting technology for containing these costs. This technology allows a set of virtual machines, or partitions, to execute applications with very high levels of both temporal and spatial isolation. This, in turn, allows each partition to be certified independently. The development of partitioned mixed-criticality systems requires updating traditional software development models, since these cover neither the new activities nor the new roles required in the development of such systems. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition on which the application will run. Traditionally, the V-model has had special relevance in embedded systems development. For this reason, the model has been adapted to take into account scenarios such as the parallel development of applications or the incorporation of a new partition into an existing system. The objective of this PhD thesis is to improve the current technology for developing partitioned mixed-criticality systems. To this end, a framework aimed specifically at facilitating and improving the development processes for this class of systems has been designed and implemented. In particular, an algorithm that generates the system partitioning automatically has been created. All the activities required to develop a partitioned system, including the new roles and activities mentioned above, have been integrated into the proposed development framework. Moreover, the design of the framework is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. Thus, the necessary tools are provided for modeling and partitioning the system, as well as for validating the results and generating the artifacts needed to compile, build and deploy it. Furthermore, in the design of the framework, its extensibility and its integration with validation tools have been a key factor.
In particular, new non-functional requirements, as well as the generation of new artifacts such as documentation or different programming languages, can be incorporated into the framework. A key part of the framework is the partitioning algorithm. This algorithm has been designed to be independent of the applications' requirements and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined. The algorithm guarantees that these constraints are satisfied in the partitioned system that results from its execution. The partitioning constraints have been designed with sufficient expressive capacity so that, with a small set of them, most of the common non-functional requirements can be expressed. The constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as its inputs. As a result of its execution, a deployment model is generated that defines the partitions needed to partition the system. In turn, each partition defines which applications are to run on it, as well as the resources the partition needs to execute correctly. The partitioning problem and the partitioning constraints are modeled mathematically by means of colored graphs, in which a proper vertex coloring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, it is possible to obtain alternative partitionings to the one initially proposed. The framework, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind turbine. In addition, the algorithm has been validated by executing numerous synthetic scenarios, including some very complex ones with more than 500 applications.

ABSTRACT

The importance of embedded software is growing as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result, it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements. Their failure may result in personal harm or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define the system partitions, and application development has to consider the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitions, validating the system and generating final artifacts. The framework has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and for new final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints, and they can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm uses system models and partitioning constraints as its inputs. It generates a deployment model that is composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph, and a valid partitioning corresponds to a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been further validated using a large number of synthetic loads, including complex scenarios with more than 500 applications.
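As a rough sketch of the graph-coloring formulation described in this abstract (the application names, conflict edges and greedy heuristic below are invented for illustration; the thesis's actual algorithm also handles resource allocation and alternative partitionings), applications become vertices, separation constraints become edges, and a proper coloring assigns each application to a partition:

```python
from typing import Dict, Hashable, Iterable, Tuple

# Illustrative greedy proper coloring: vertices are applications, an edge
# means two applications must not share a partition (e.g. different
# criticality levels), and each color is a partition.
def partition(apps: Iterable[Hashable],
              conflicts: Iterable[Tuple[Hashable, Hashable]]) -> Dict[Hashable, int]:
    adjacent: Dict[Hashable, set] = {a: set() for a in apps}
    for a, b in conflicts:
        adjacent[a].add(b)
        adjacent[b].add(a)

    coloring: Dict[Hashable, int] = {}
    # Color higher-degree vertices first (Welsh-Powell style heuristic).
    for app in sorted(adjacent, key=lambda a: -len(adjacent[a])):
        used = {coloring[n] for n in adjacent[app] if n in coloring}
        coloring[app] = next(c for c in range(len(adjacent)) if c not in used)
    return coloring

# Hypothetical example: a flight-critical application must be isolated from
# lower-criticality ones, which may share a partition.
print(partition(["attitude_ctl", "telemetry", "payload"],
                [("attitude_ctl", "payload"), ("attitude_ctl", "telemetry")]))
# -> {'attitude_ctl': 0, 'telemetry': 1, 'payload': 1}
```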
Abstract:
A processor emulator is a software tool that allows legacy computer programs to be executed on a modern processor. In the past, emulators were used in relatively trivial applications such as the maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand the utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system's functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, allowing their behaviours to be compared directly.
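As a toy illustration of the "common semantics" idea (not the paper's own formalism; the state components and observables below are invented), both the original and the emulated system can be modelled as step functions over a shared state, so that their observable traces can be compared directly:

```python
from typing import Callable, List, Tuple

# Illustrative sketch only: legacy system and emulation share one semantic
# domain (a machine state), so their behaviours are directly comparable.
State = dict   # e.g. {"pc": 0, "acc": 0}  (hypothetical observables)
Step = Callable[[State], State]

def trace(step: Step, init: State, fuel: int) -> List[Tuple[int, int]]:
    """Run a system for `fuel` steps and record its observable outputs."""
    s = dict(init)
    obs = []
    for _ in range(fuel):
        s = step(s)
        obs.append((s["pc"], s["acc"]))   # the agreed-upon observables
    return obs

def behaviourally_equivalent(old: Step, new: Step, init: State, fuel: int) -> bool:
    """Emulation is correct (up to `fuel` steps) iff the traces coincide."""
    return trace(old, init, fuel) == trace(new, init, fuel)
```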
Abstract:
Real-time software systems are rarely developed once and left to run. They are subject to changes of requirements as the applications they support expand, and they commonly outlive the platforms they were designed to run on. A successful real-time system is duplicated and adapted to a variety of applications - it becomes a product line. Current methods for real-time software development are commonly based on low-level programming languages and involve considerable duplication of effort when a similar system is to be developed or the hardware platform changes. To provide more dependable, flexible and maintainable real-time systems at a lower cost, what is needed is a platform-independent approach to real-time systems development. The development process is composed of two phases: a platform-independent phase, which defines the desired system behaviour and develops a platform-independent design and implementation, and a platform-dependent phase, which maps the implementation onto the target platform. The latter phase should be highly automated. For critical systems, assessing dependability is crucial. The partitioning into platform-dependent and platform-independent phases has to support verification of system properties through both phases.
Abstract:
In component-based software engineering, programs are constructed from pre-defined software library modules. However, if the library's subroutines do not exactly match the programmer's requirements, the subroutines' code must be adapted accordingly. For this process to be acceptable in safety-critical or mission-critical applications, where all code must be proven correct, it must be possible to verify the correctness of the adaptations themselves. In this paper we show how refinement theory can be used to model typical adaptation steps and to define the conditions that must be proven to verify that a library subroutine has been adapted correctly.
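For intuition, the standard refinement-calculus proof obligations (shown here as a generic sketch; the paper's own adaptation conditions may differ in detail) state that an adapted subroutine correctly replaces the original when it weakens the precondition and strengthens the postcondition:

```latex
% Standard (Morgan-style) refinement obligations, for intuition only.
% pre_0 records the initial-state values referred to by the postconditions.
\[
  [\mathit{pre},\ \mathit{post}] \;\sqsubseteq\; [\mathit{pre}',\ \mathit{post}']
  \quad\text{provided}\quad
  \mathit{pre} \Rightarrow \mathit{pre}'
  \ \ \text{and}\ \
  \mathit{pre}_0 \wedge \mathit{post}' \Rightarrow \mathit{post}.
\]
```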
Abstract:
Users of safety-critical systems are expected to effectively control or monitor complex systems, with errors potentially leading to catastrophe. For such systems, safety is of paramount importance and must be designed into the human-machine interface. While many case studies show how inadequate design practice led to poor safety and usability, concrete guidance on good design practices is scarce. The paper argues that the pattern language paradigm, widely used in the software design community, is a suitable means of documenting appropriate design strategies. We discuss how typical usability-related properties (e.g., flexibility) need some adjustment before they can be used to assess safety-critical systems, and document a pattern language based on corresponding "safety-usability" principles.
Abstract:
In developing neural network techniques for real-world applications, it is still very rare to see estimates of confidence placed on the neural network predictions. This is a major deficiency, especially in safety-critical systems. In this paper we explore three distinct methods of producing point-wise confidence intervals using neural networks. We compare and contrast Bayesian, Gaussian Process and Predictive error bars evaluated on real data. The problem domain is concerned with the calibration of a real automotive engine management system for both air-fuel ratio determination and on-line ignition timing. This problem requires real-time control, and its safety-critical nature makes it a good candidate for exploring the use of confidence predictions.
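To make one of the three approaches concrete (a Gaussian-process predictive interval; the toy data, RBF kernel and noise level below are invented for the example, not taken from the engine study), the point-wise error bars follow directly from the standard GP regression equations:

```python
import numpy as np

# Gaussian-process regression with an RBF kernel: the predictive variance
# at each test input gives a point-wise confidence interval.
def rbf(a, b, length=1.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
x_train = np.linspace(0, 5, 20)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(20)   # noisy toy data
x_test = np.linspace(0, 5, 100)

noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha                                    # predictive mean
v = np.linalg.solve(L, K_s)
var = np.diag(K_ss) - np.sum(v * v, axis=0) + noise     # predictive variance
lower, upper = mean - 1.96 * np.sqrt(var), mean + 1.96 * np.sqrt(var)  # 95% bars
```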
Abstract:
There is an increasing emphasis on the use of software to control safety-critical plants in a wide range of applications. The importance of ensuring the correct operation of such potentially hazardous systems points to an emphasis on the verification of the system relative to a suitably secure specification. However, the process of verification is often made more complex by the concurrency and real-time considerations which are inherent in many applications. A response to this is the use of formal methods for the specification and verification of safety-critical control systems. These provide a mathematical representation of a system which permits reasoning about its properties. This thesis investigates the use of the formal method Communicating Sequential Processes (CSP) for the verification of a safety-critical control application. CSP is a discrete event-based process algebra which has a compositional axiomatic semantics that supports verification by formal proof. The application is an industrial case study which concerns the concurrent control of a real-time high-speed mechanism. It is seen from the case study that the axiomatic verification method employed is complex. It requires the user to have a relatively comprehensive understanding of the nature of the proof system and the application. By making a series of observations, the thesis notes that CSP possesses the scope to support a more procedural approach to verification in the form of testing. This thesis investigates the technique of testing and proposes the method of Ideal Test Sets. By exploiting the underlying structure of the CSP semantic model, it is shown that for certain processes and specifications the obligation of verification can be reduced to that of testing the specification over a finite subset of the behaviours of the process.
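As a toy illustration of reducing verification to testing over a finite subset of behaviours (a generic finite-trace check with an invented two-state machine, not the thesis's Ideal Test Sets construction), one can enumerate all traces of a labelled transition system up to a bound and test a trace specification on each:

```python
from typing import Callable, Dict, List, Tuple

# state -> [(event, next_state)]
LTS = Dict[str, List[Tuple[str, str]]]

def traces_up_to(lts: LTS, start: str, depth: int):
    """Enumerate every event trace of length 1..depth (a finite test set)."""
    frontier = [(start, ())]
    for _ in range(depth):
        successors = []
        for state, trace in frontier:
            for event, nxt in lts.get(state, []):
                yield trace + (event,)
                successors.append((nxt, trace + (event,)))
        frontier = successors

def satisfies(lts: LTS, start: str, depth: int,
              spec: Callable[[Tuple[str, ...]], bool]) -> bool:
    """Test the trace specification over the finite subset of behaviours."""
    return all(spec(t) for t in traces_up_to(lts, start, depth))

# Example: the machine must always perform 'start' before anything else.
machine = {"idle": [("start", "run")], "run": [("stop", "idle")]}
assert satisfies(machine, "idle", 4, lambda t: t[0] == "start")
```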
Abstract:
Human Resource (HR) systems and practices generally referred to as High Performance Work Practices (HPWPs) (Huselid, 1995) (sometimes termed High Commitment Work Practices or High Involvement Work Practices) have attracted much research attention in past decades. Although many conceptualizations of the construct have been proposed, there is general agreement that HPWPs encompass a bundle or set of HR practices including sophisticated staffing, intensive training and development, incentive-based compensation, performance management, initiatives aimed at increasing employee participation and involvement, job safety and security, and work design (e.g. Pfeffer, 1998). It is argued that these practices directly and indirectly influence the extent to which employees' knowledge, skills, abilities, and other characteristics are utilized in the organization. Research spanning nearly 20 years has provided considerable empirical evidence for relationships between HPWPs and various measures of performance including increased productivity, improved customer service, and reduced turnover (e.g. Guthrie, 2001; Belt & Giles, 2009). With the exception of a few papers (e.g., Laursen & Foss, 2003), this literature appears to lack focus on how HPWPs influence or foster innovation-related attitudes and behaviours, extra-role behaviors, and performance. This situation exists despite the vast evidence demonstrating the importance of innovation, proactivity, and creativity in their various forms to individual, group, and organizational performance outcomes. Several pertinent issues arise when considering HPWPs and their relationship to innovation and performance outcomes. At a broad level is the issue of which HPWPs are related to which innovation-related variables. Another issue not well identified in research relates to employees' perceptions of HPWPs: does an employee actually perceive the HPWP-outcomes relationship? No matter how well HPWPs are designed, if they are not perceived and experienced by employees to be effective or worthwhile, then their likely success in achieving positive outcomes is limited. At another level, research needs to consider the mechanisms through which HPWPs influence innovation and performance. The research question here relates to what possible mediating variables are important to the success or failure of HPWPs in impacting innovative behaviours and attitudes, and what are the potential process considerations? These questions call for theory refinement and the development of more comprehensive models of the HPWP-innovation/performance relationship that include intermediate linkages and boundary conditions (Ferris, Hochwarter, Buckley, Harrell-Cook, & Frink, 1999). While there are many calls for this type of research to be made a high priority, to date researchers have made few inroads into answering these questions. This symposium brings together researchers from Australia, Europe, Asia and Africa to examine these various questions relating to the HPWP-innovation-performance relationship. Each paper discusses a HPWP and potential variables that can facilitate or hinder the effects of these practices on innovation- and performance-related outcomes. The first paper by Johnston and Becker explores HPWPs in relation to work design in a disaster response organization that shifts quickly from business as usual to rapid response. The researchers examine how the enactment of the organizational response is devolved to groups and individuals.
Moreover, they assess motivational characteristics that exist in dual work designs (normal operations and periods of disaster activation) and the implications for innovation. The second paper by Jørgensen reports the results of an investigation into training and development practices and innovative work behaviors (IWBs) in Danish organizations. Research on how to design and implement training and development initiatives to support IWBs and innovation in general is surprisingly scant and often vague. This research investigates the mechanisms by which training and development initiatives influence employee behaviors associated with innovation, and provides insights into how training and development can be used effectively by firms to attract and retain valuable human capital in knowledge-intensive firms. The next two papers in this symposium consider the role of employee perceptions of HPWPs and their relationships to innovation-related variables and performance. First, Bish and Newton examine perceptions of the characteristics and awareness of occupational health and safety (OHS) practices and their relationship to individual-level adaptability and proactivity in an Australian public service organization. The authors explore the role of perceived supportive and visionary leadership and its impact on the OHS policy-adaptability/proactivity relationship. The study highlights the positive main effects of awareness and characteristics of OHS policies, and of supportive and visionary leadership, on individual adaptability and proactivity. It also highlights the important moderating effects of leadership in the OHS policy-adaptability/proactivity relationship. Okhawere and Davis present a conceptual model developed for a Nigerian study in the safety-critical oil and gas industry that takes a multi-level approach to the HPWP-safety relationship. Adopting a social exchange perspective, they propose that at the organizational level, organizational climate for safety mediates the relationship between enacted HPWPs and organizational safety performance (prescribed and extra-role performance). At the individual level, the experience of HPWPs impacts individual behaviors and attitudes in organizations, here operationalized as safety knowledge, skills and motivation, and these influence individual safety performance. However, these latter relationships are moderated by organizational climate for safety. A positive organizational climate for safety strengthens the relationship between individual safety behaviors and attitudes and individual-level safety performance, thereby suggesting a cross-level boundary condition. The model includes both safety performance (behaviors) and organizational-level safety outcomes, operationalized as accidents, injuries, and fatalities. The final paper of this symposium, by Zhang and Liu, explores leader development and the relationship between transformational leadership and employee creativity and innovation in China. The authors further develop a model that incorporates the effects of extrinsic motivation (pay for performance: PFP) and employee collectivism in the leader-employee creativity relationship. The paper's contributions include the incorporation of a PFP effect on creativity as a moderator, rather than a predictor as in most studies; the exploration of the PFP effect from both fairness and strength perspectives; and the advancement of knowledge on the impact of collectivism on the leader-employee creativity link.
Last, this is the first study to examine three-way interaction effects among leader-member exchange (LMX), PFP and collectivism, thus enriching our understanding of how to promote employee creativity. In conclusion, this symposium draws upon the findings of four empirical studies and one conceptual study to provide insight into how different variables facilitate or potentially hinder the influence of various HPWPs on innovation and performance. We will propose a number of questions for further consideration and discussion. The symposium will address the conference theme of 'Capitalism in Question' by highlighting how HPWPs can promote the financial health and performance of organizations while maintaining a high level of regard and respect for employees and organizational stakeholders. Furthermore, the focus on different countries and cultures explores the overall research question in relation to different modes or stages of development of capitalism.
Abstract:
Currently, there is increasing use of nanomaterials in the food industry, thanks to the many advantages they offer, which make the products that contain them more competitive in the market. Their physicochemical properties often differ from those of the bulk materials, which requires specialized risk assessment. This should cover the risks to the health of workers and consumers as well as possible environmental risks. Risk assessment methods must be kept up to date as the use of nanomaterials becomes more widespread, especially now that they are making their way into consumer products. Today there is no specific legislation for nanomaterials, but there are several European provisions and regulations that cover them. This review gives an overview of risk assessment and the current legislation regarding the use of nanotechnology in the food industry.
Abstract:
Purpose – This paper aims to contribute towards understanding how safety knowledge can be elicited from railway experts for the purposes of supporting effective decision-making. Design/methodology/approach – A consortium of safety experts from across the British railway industry is formed. Collaborative modelling of the knowledge domain is used as an approach to the elicitation of safety knowledge from experts. From this, a series of knowledge models is derived to inform decision-making. This is achieved by using Bayesian networks as a knowledge modelling scheme, underpinning a Safety Prognosis tool that serves meaningful prognostic information and visualises it to predict safety violations. Findings – Collaborative modelling of safety-critical knowledge is a valid approach to knowledge elicitation and its sharing across the railway industry. This approach overcomes some of the key limitations of existing approaches to knowledge elicitation. Such models become an effective tool for the prediction of safety cases using railway data. This is demonstrated using passenger–train interaction safety data. Practical implications – This study contributes to practice in two main directions: by documenting an effective approach to knowledge elicitation and knowledge sharing, while also helping the transport industry to understand safety. Social implications – By supporting the railway industry in its efforts to understand safety, this research has the potential to benefit railway passengers, staff and communities in general, which is a priority for the transport sector. Originality/value – This research applies a knowledge elicitation approach to understanding safety based on collaborative modelling, which is a novel approach in the context of transport.
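As a minimal sketch of the Bayesian-network idea (the variables and probabilities below are invented for illustration, not taken from the railway study), a two-node network can relate an expert-elicited risk factor to a predicted safety violation, with inference by direct enumeration:

```python
# A two-node discrete Bayesian network; structure and numbers are invented.

# P(CrowdingHigh), elicited from experts
p_crowding = {True: 0.3, False: 0.7}

# P(Violation | CrowdingHigh), elicited from experts
p_violation = {True: {True: 0.20, False: 0.80},
               False: {True: 0.02, False: 0.98}}

def posterior_crowding_given_violation() -> float:
    """P(CrowdingHigh = true | Violation = true) via Bayes' rule."""
    joint_true = p_crowding[True] * p_violation[True][True]
    joint_false = p_crowding[False] * p_violation[False][True]
    return joint_true / (joint_true + joint_false)

print(posterior_crowding_given_violation())   # ~0.81
```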
Abstract:
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false-alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based, aircraft collision detection system that is based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for the detection of potential collision threats against a ground clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false-alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false-alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection-range versus false-alarm curves generated from airborne target and non-target image data.
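For a sense of the morphological stage (a generic close-minus-open small-target filter; the kernel size, threshold and input filename are invented, and the paper's actual pipeline also includes temporal filtering), dim point-like targets can be emphasised against a smooth sky as follows:

```python
import cv2
import numpy as np

# Close-minus-open (CMO) filtering: a common morphological technique for
# highlighting small, dim targets against a smooth sky background.
def cmo_detect(gray: np.ndarray, ksize: int = 5, thresh: int = 20) -> np.ndarray:
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # fills dark gaps
    opened = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)   # removes bright specks
    cmo = cv2.subtract(closed, opened)      # strong response at small features
    _, mask = cv2.threshold(cmo, thresh, 255, cv2.THRESH_BINARY)
    return mask                             # candidate target pixels

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
candidates = cmo_detect(frame)
```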
Abstract:
Reliable communication is one of the major concerns in wireless sensor networks (WSNs). Multipath routing is an effective way to improve communication reliability in WSNs. However, most existing multipath routing protocols for sensor networks are reactive and require dynamic route discovery. If there are many sensor nodes between a source and a destination, the route discovery process creates a long end-to-end transmission delay, which causes difficulties in time-critical applications. To overcome this difficulty, efficient route update and maintenance processes are proposed in this paper. The aim is to limit the amount of routing overhead with a two-tier routing architecture and to introduce a combination of piggyback and trigger updates to replace the periodic update process, which is the main source of unnecessary routing overhead. Simulations are carried out to demonstrate the effectiveness of the proposed processes in reducing the total amount of routing overhead compared with existing popular routing protocols.
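The sketch below illustrates the general piggyback-plus-trigger idea (with invented message formats and node logic, not the paper's protocol): routing state is refreshed opportunistically on outgoing data packets, and an explicit update is sent only when the topology actually changes, replacing periodic control traffic:

```python
from dataclasses import dataclass, field
from typing import Dict

# Illustrative node logic only: piggyback routing state on data traffic and
# send an explicit update only when a link change is detected.

@dataclass
class Node:
    node_id: int
    routes: Dict[int, int] = field(default_factory=dict)  # dest -> next hop
    route_version: int = 0

    def send_data(self, dest: int, payload: bytes) -> dict:
        # Piggyback: attach the current route version to every data packet,
        # so neighbours can detect staleness without periodic control traffic.
        return {"src": self.node_id, "dst": dest, "payload": payload,
                "route_version": self.route_version}

    def on_link_change(self, dest: int, new_next_hop: int) -> dict:
        # Trigger update: emitted immediately, and only, on a real change.
        self.routes[dest] = new_next_hop
        self.route_version += 1
        return {"type": "route_update", "src": self.node_id,
                "routes": dict(self.routes), "version": self.route_version}
```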
Abstract:
Railroad corridors contain a large number of Insulated Rail Joints (IRJs), which act as safety-critical elements in the circuitry of the signaling and broken-rail identification systems. IRJs are regarded as sources of excitation for the passage of loaded wheels, leading to high impact forces; these forces in turn cause dips, cross levels and twists in the railroad geometry in close proximity to the sections containing the IRJs, in addition to local damage to the railhead of the IRJs. Therefore, systematic monitoring of the IRJs in a railroad is prudent to mitigate the potential risk of their sudden failure (e.g., broken tie plates) under traffic. This paper presents a simple method of periodic recording of images using time-lapse photography, together with total station surveying measurements, to understand the ongoing deterioration of the IRJs and their surroundings. Over a 500-day period, data were collected to examine the trends in narrowing of the joint gap due to plastic deformation of the railhead edges, and in the dips, cross levels and twists caused to the railroad geometry by the settlement of ties (sleepers) around the IRJs. The results show that the average progressive settlement beneath the IRJs is larger than that under continuously welded rail, which leads to excessive deviation of the railroad profile, cross levels and twists.
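To make the trend analysis concrete (the survey dates and settlement values below are synthetic, chosen only to mirror the roughly 500-day record described above), a least-squares line fitted to periodic measurements gives the progressive settlement rate at the joint:

```python
import numpy as np

# Synthetic example of trend-fitting periodic survey measurements at an IRJ.
days = np.array([0, 60, 120, 180, 250, 320, 400, 500])              # survey dates
settlement_mm = np.array([0.0, 1.1, 2.0, 3.2, 4.1, 5.3, 6.8, 8.4])  # invented data

rate, offset = np.polyfit(days, settlement_mm, 1)   # slope in mm/day, intercept
print(f"progressive settlement rate: {rate:.4f} mm/day")
```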
Abstract:
Crash statistics that include the blood alcohol concentration (BAC) of vehicle operators reveal that crash-involved motorcyclists are over-represented at low BACs (e.g., ≤0.05%). This riding simulator study compared riding performance and hazard response under three low-dose alcohol conditions (sober, 0.02% BAC, 0.05% BAC). Forty participants (20 novice, 20 experienced) completed simulated rides in urban and rural scenarios while responding to a safety-critical peripheral detection task (PDT). Results showed a significant increase in the standard deviation of lateral position in the urban scenario and in PDT reaction time in the rural scenario under 0.05% BAC compared with zero alcohol. Participants were most likely to collide with an unexpected pedestrian in the urban scenario at 0.02% BAC, with novice participants at a greater relative risk than experienced riders. Novices chose to ride faster than experienced participants in the rural scenario regardless of BAC. Not all results were significant, underlining the complexity of the effects of low-dose BAC on riding performance and the need for further research. The results of this simulator study provide some support for a legal BAC for motorcyclists below 0.05%.
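For reference, the standard deviation of lateral position (SDLP) reported above is computed directly from the simulator's lane-position samples (the generic formula, with invented sample values, not the study's data):

```python
import numpy as np

# SDLP: a standard measure of lane-keeping performance. Values are invented.
lateral_position_m = np.array([0.02, -0.05, 0.11, 0.08, -0.03, 0.15, -0.09])
sdlp = np.std(lateral_position_m, ddof=1)   # sample standard deviation
print(f"SDLP = {sdlp:.3f} m")
```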
Abstract:
The insulated rail joint (IRJ) is an essential component in a track circuit that controls the signaling system. Failure of IRJs leads to improper functioning of the signals, with potential for catastrophic results. Therefore, IRJs are regarded as safety-critical sections of the rail network; hence, all of their components must be maintained in pristine design condition.