21 results for Fault prediction
at Instituto Politécnico do Porto, Portugal
Abstract:
Geostatistics has been successfully used to analyze and characterize the spatial variability of environmental properties. Besides giving estimated values at unsampled locations, it provides a measure of the accuracy of the estimate, which is a significant advantage over traditional methods used to assess pollution. In this work, universal block kriging is applied in a novel way to model and map the spatial distribution of salinity measurements gathered by an Autonomous Underwater Vehicle during a sea outfall monitoring campaign, with the aim of distinguishing the effluent plume from the receiving waters, characterizing its spatial variability in the vicinity of the discharge and estimating dilution. The results demonstrate that the geostatistical methodology can provide good estimates of the dispersion of effluents, which are very valuable in assessing the environmental impact and managing sea outfalls. Moreover, since accurate measurements of the plume's dilution are rare, such studies may prove very helpful in the future for validating dispersion models.
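To make the estimation step concrete, the following minimal sketch solves the ordinary kriging system for a single unsampled location in Python/NumPy. It assumes a spherical semivariogram and omits the drift terms and block averaging that universal block kriging adds, so the variogram model and function names are illustrative assumptions only.

import numpy as np

def spherical_semivariogram(h, nugget=0.0, sill=1.0, rng=100.0):
    # Assumed spherical model: rises to the sill at distance `rng`.
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h < rng, g, sill)

def ordinary_kriging_point(coords, values, target):
    # Solve the ordinary kriging system and return the estimate and the
    # kriging (estimation) variance at one unsampled location.
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = spherical_semivariogram(d)
    A[n, :n] = 1.0
    A[:n, n] = 1.0                      # unbiasedness (Lagrange) constraint
    b = np.ones(n + 1)
    b[:n] = spherical_semivariogram(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)
    w, mu = sol[:n], sol[n]
    estimate = w @ values
    variance = w @ b[:n] + mu           # kriging variance at the target
    return estimate, variance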
Abstract:
This paper presents an architecture (Multi-μ) being implemented to study and develop software-based fault-tolerance mechanisms for real-time systems, using the Ada language (Ada 95) and Commercial Off-The-Shelf (COTS) components. Several issues regarding fault tolerance are presented, and mechanisms to achieve fault tolerance through software active replication in Ada 95 are discussed. The Multi-μ architecture, based on a specifically proposed Fault Tolerance Manager (FTManager), is then described. Finally, some considerations are made about the work carried out so far and essential future developments.
Abstract:
Adhesive joints are nowadays widely employed as a fast and effective joining process, and the techniques for their strength prediction have also improved over the years. Cohesive Zone Models (CZMs) coupled to Finite Element Method (FEM) analyses surpass the limitations of stress- and fracture-based criteria and allow damage to be modelled. CZMs require the energy release rates in tension (Gn) and shear (Gs), the respective fracture energies in tension (Gnc) and shear (Gsc), and the cohesive strengths (tn0 for tension and ts0 for shear). In this work, the influence of the parameters of a triangular CZM used to model a thin adhesive layer is studied, to estimate their effect on the strength predictions. Conclusions are drawn on the accuracy of the simulation results under variations of each of these parameters.
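As a concrete illustration of the triangular law discussed above, the sketch below evaluates the traction for a given separation in one pure mode; the stiffness, strength and fracture energy values are placeholders, not the parameter ranges studied in the paper.

def triangular_czm_traction(delta, K=1.0e6, t0=20.0, Gc=0.5):
    # Triangular (bilinear) traction-separation law for a single pure mode.
    # K: initial stiffness, t0: cohesive strength, Gc: fracture energy
    # (area under the curve); units are arbitrary in this sketch.
    delta0 = t0 / K            # separation at damage onset
    deltaf = 2.0 * Gc / t0     # separation at complete failure
    if delta <= delta0:
        return K * delta                                  # elastic branch
    if delta >= deltaf:
        return 0.0                                        # fully debonded
    return t0 * (deltaf - delta) / (deltaf - delta0)      # linear softening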
Abstract:
Although their physical properties make composites an attractive family of materials, their machining can cause several damage modes, such as delamination, fibre pull-out, thermal degradation and others. Minimizing the axial thrust force during drilling reduces the probability of delamination onset, as has been demonstrated by analytical models based on linear elastic fracture mechanics (LEFM). A finite element model using solid elements from the ABAQUS® software library and interface elements incorporating a cohesive damage model was developed to simulate thrust forces and delamination onset during drilling. Thrust force results for delamination onset are compared with existing analytical models.
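For reference, one widely cited analytical model of this kind is the LEFM plate solution of Hocheng and Dharan for the critical thrust force at delamination onset; the sketch below is offered only to illustrate the type of closed-form model such comparisons use, and is an assumption rather than the specific model adopted in the paper.

import math

def critical_thrust_force(G_Ic, E, h, nu):
    # Hocheng-Dharan LEFM estimate of the thrust force at delamination onset.
    # G_Ic: mode I toughness, E: modulus of the uncut plies,
    # h: uncut thickness under the drill, nu: Poisson's ratio.
    return math.pi * math.sqrt(8.0 * G_Ic * E * h**3 / (3.0 * (1.0 - nu**2)))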
Abstract:
This work reports an experimental and numerical study of the bending behaviour of two-dimensional adhesively-bonded scarf repairs of carbon-epoxy laminates, bonded with the ductile adhesive Araldite 2015®. Scarf angles varying from 2° to 45° were tested. The experimental work was used to validate a numerical Finite Element analysis in ABAQUS® and a methodology developed by the authors to predict the strength of bonded assemblies. This methodology consists of replacing the adhesive layer by cohesive elements, including mixed-mode criteria to deal with the mixed-mode behaviour usually observed in structures. Trapezoidal laws in pure modes I and II were used to account for the ductility of the adhesive. The cohesive laws in pure modes I and II were determined with Double Cantilever Beam and End-Notched Flexure tests, respectively, using an inverse method. Since interlaminar and transverse intralaminar failures of the carbon-epoxy components also occurred in some regions of the experiments, cohesive laws to simulate these failure modes were also obtained experimentally with a similar procedure. A good correlation with the experiments was found for the elastic stiffness, maximum load and failure mode of the repairs, showing that this methodology accurately simulates the mechanical behaviour of bonded assemblies.
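The abstract does not state which mixed-mode criterion was adopted; a power-law energetic form is a common choice and is sketched below purely as an illustration (alpha = 1 gives the linear criterion).

def mixed_mode_failure(G_I, G_II, G_Ic, G_IIc, alpha=1.0):
    # Power-law energetic criterion for mixed-mode crack growth: failure is
    # predicted when the normalized energy release rates reach the envelope.
    return (G_I / G_Ic) ** alpha + (G_II / G_IIc) ** alpha >= 1.0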
Abstract:
Polyolefins are especially difficult to bond because of their non-polar, non-porous and chemically inert surfaces. Acrylic adhesives used in industry are particularly suited to bonding these materials, including many grades of polypropylene (PP) and polyethylene (PE), without special surface preparation. In this work, the tensile strength of single-lap PE and mixed joints bonded with an acrylic adhesive was investigated. The mixed joints combined PE with aluminium (AL) or carbon fibre reinforced plastic (CFRP) substrates. The PE substrates were only cleaned with isopropanol, which ensured cohesive failures. For the PE-CFRP joints, three different surface preparations were employed for the CFRP substrates: cleaning with acetone, abrasion with 100-grit sandpaper and peel-ply finishing. In the PE-AL joints, the AL bonding surfaces were prepared by the following methods: cleaning with acetone, abrasion with 180- and 320-grit sandpapers, grit blasting and chemical etching with chromic acid. After abrasion of the CFRP and AL substrates, the surfaces were always cleaned with acetone. The tensile strengths were compared with numerical results from ABAQUS® and a mixed-mode (I+II) cohesive damage model. A good agreement was found between the experimental and numerical results, except for the PE-AL joints, since the AL surface treatments proved not to be effective.
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening the time-to-market justifies the industry investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability offered by OCD infrastructures constitute a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
Abstract:
The structural integrity of multi-component structures is usually determined by the strength and durability of their unions. Adhesive bonding is often chosen over welding, riveting and bolting because of the reduction of stress concentrations, the smaller weight penalty and easier manufacturing, among other advantages. In the past decades, the Finite Element Method (FEM) has been used for the simulation and strength prediction of bonded structures, through strength-of-materials or fracture-mechanics-based criteria. Cohesive-zone models (CZMs) have already proved to be an effective tool for modelling damage growth, surpassing a few limitations of the aforementioned techniques. Despite this, they remain restricted to damage growth along predefined paths. The eXtended Finite Element Method (XFEM) is a recent improvement of the FEM, developed to allow the growth of discontinuities within bulk solids along an arbitrary path by enriching degrees of freedom with special displacement functions, thus overcoming the main restriction of CZMs. These two techniques were tested in the simulation of adhesively bonded single- and double-lap joints. The comparative evaluation of the two methods showed their capabilities and limitations for this specific purpose.
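For clarity, in its textbook form the enrichment mentioned above augments the usual FEM interpolation with a Heaviside (jump) function across the discontinuity, roughly u_h(x) = \sum_i N_i(x)\,u_i + \sum_{j \in J} N_j(x)\,H(x)\,a_j, where the N are shape functions, H(x) changes sign across the crack and the a_j are extra enriched degrees of freedom; near-tip branch functions are usually added as well. This is the standard formulation, stated here only to explain the idea, not necessarily the exact variant used in the paper.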
Abstract:
Dependability is a critical factor in computer systems, requiring high-quality validation and verification procedures during the development stage. At the same time, digital devices are getting smaller and access to their internal signals and registers is increasingly complex, requiring innovative debugging methodologies. To address this issue, most recent microprocessors include an on-chip debug (OCD) infrastructure to facilitate common debugging operations. This paper proposes an enhanced OCD infrastructure aimed at supporting the verification of fault-tolerant mechanisms through fault injection campaigns. This upgraded on-chip debug and fault injection (OCD-FI) infrastructure provides an efficient fault injection mechanism with improved capabilities and dynamic behavior. Preliminary results show that this solution provides flexibility in fault triggering and allows high-speed, real-time fault injection in memory elements.
Abstract:
Fault injection is frequently used for the verification and validation of dependable systems. When targeting real-time, microprocessor-based systems, the process becomes significantly more complex. This paper proposes two complementary solutions to improve the execution of real-time fault injection campaigns, both in terms of performance and capabilities. The methodology is based on the use of the on-chip debug mechanisms present in modern electronic devices. The main objective is the injection of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented and compared in terms of performance gain and logic overhead.
Abstract:
As electronic devices get smaller and more complex, dependability assurance is becoming fundamental for many mission-critical computer-based systems. This paper presents a case study on the possibility of using the on-chip debug infrastructures present in most current microprocessors to execute real-time fault injection campaigns. The proposed methodology is based on a debugger customized for fault injection and designed for maximum flexibility, and consists of injecting bit-flip faults into memory elements without modifying or halting the target application. The debugger design is easily portable and applicable to different architectures, providing a flexible and efficient mechanism for verifying and validating fault-tolerant components.
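The core of such a campaign is the bit-flip itself: read a word through the OCD link while the application runs, flip one bit and write it back. The sketch below illustrates this step; the debugger object and its read_word/write_word methods are hypothetical placeholders, not the actual debugger API used in the case study.

import random

def inject_bit_flip(debugger, address, bit=None):
    # Flip one bit of a 32-bit word at `address` through a (hypothetical)
    # on-chip debug connection, without halting the target.
    word = debugger.read_word(address)
    if bit is None:
        bit = random.randrange(32)      # random bit position in the word
    debugger.write_word(address, word ^ (1 << bit))
    return bit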
Abstract:
The rapid increase in the use of microprocessor-based systems in critical areas, where failures imply risks to human lives, the environment or expensive equipment, has significantly increased the need for dependable systems, able to detect, tolerate and eventually correct faults. The verification and validation of such systems is frequently performed via fault injection, using various forms and techniques. However, as electronic devices get smaller and more complex, controllability and observability issues, and sometimes real-time constraints, make it harder to apply most conventional fault injection techniques. This paper proposes a fault injection environment and a scalable methodology to assist the execution of real-time fault injection campaigns, providing enhanced performance and capabilities. Our proposed solutions are based on the use of common and customized on-chip debug (OCD) mechanisms, present in many modern electronic devices, with the main objective of enabling the insertion of faults into microprocessor memory elements with minimum delay and intrusiveness. Different configurations were implemented, starting from basic Commercial Off-The-Shelf (COTS) microprocessors equipped with real-time OCD infrastructures, up to improved solutions based on modified interfaces and dedicated OCD circuitry that enhance fault injection capabilities and performance. All methodologies and configurations were evaluated and compared concerning performance gain and silicon overhead.
Abstract:
To increase the amount of logic available to users in SRAM-based FPGAs, manufacturers are using nanometric technologies to boost logic density and reduce costs, making these devices more attractive. However, these technological improvements also make FPGAs particularly vulnerable to configuration memory bit-flips caused by power fluctuations, strong electromagnetic fields and radiation. This issue is particularly sensitive because of the increasing number of configuration memory cells needed to define their functionality. A short survey of the most recent publications is presented to support the options assumed during the definition of a framework for implementing circuits immune to bit-flip induction mechanisms in memory cells, based on a customized redundant infrastructure and on a detection-and-fix controller.
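As a rough conceptual model of the redundant infrastructure and detection-and-fix controller, the sketch below majority-votes three redundant copies of a word and rewrites any copy that disagrees; the real framework operates on FPGA configuration memory in hardware, so this Python fragment only illustrates the voting-and-repair idea.

def detect_and_fix(a, b, c):
    # Bitwise majority vote over three redundant copies, followed by
    # repair of any copy that differs from the voted value.
    voted = (a & b) | (a & c) | (b & c)
    repaired = [voted if copy != voted else copy for copy in (a, b, c)]
    return voted, repaired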
Abstract:
Fault injection is frequently used for the verification and validation of the fault tolerant features of microprocessors. This paper proposes the modification of a common on-chip debugging (OCD) infrastructure to add fault injection capabilities and improve performance. The proposed solution imposes a very low logic overhead and provides a flexible and efficient mechanism for the execution of fault injection campaigns, being applicable to different target system architectures.