995 results for criticality analysis

Relevance: 70.00%

Abstract:

Risk management in healthcare comprises a group of complex actions implemented to improve the quality of healthcare services and to guarantee patient safety. Risks cannot be eliminated, but they can be controlled with risk assessment methods derived from industrial applications; among these, Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), focusing attention on human and non-human factors according to the organizational framework selected by the WHO. © Springer International Publishing Switzerland 2014.
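
As a rough illustration of how FMECA ranks failure modes, the sketch below computes the conventional risk priority number (severity × occurrence × detection); the failure modes and scores are invented for illustration and are not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Risk Priority Number: higher means more critical
        return self.severity * self.occurrence * self.detection

# Illustrative failure modes for a home-care service (hypothetical scores)
modes = [
    FailureMode("Wrong drug dosage administered", 9, 3, 4),
    FailureMode("Missed scheduled home visit", 5, 4, 2),
    FailureMode("Incomplete patient record", 6, 5, 5),
]

for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")
```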

Relevance: 60.00%

Abstract:

The design of Industrial Wastewater Treatment Plants (ETEIs) should aim at an average performance that may only be exceeded a certain number of times during the plant's operational life. This work proposes applying the methodology known as the Coefficient of Reliability (CDC) to quantify the reliability of the physical (oil-water separation, SAO, and flotation) and biological (activated sludge with extended aeration) treatment stages, considering oily effluent from petroleum refining. This methodology, however, does not allow the probable causes of poor treatability performance to be identified. For this reason, the application of the risk management tool known as FMECA (Failure Modes, Effects and Criticality Analysis) is also proposed, which allows qualitative field observations to be quantified, making the values comparable so that the risks and the criticality of the treatment stages under study can be ranked. The biological stage showed the highest reliability for the NH3 parameter, even though the risk analysis indicated this stage as the most critical. In other words, a reliable system does not necessarily exhibit lower criticality, since poor management will lead to possible violations of pre-established targets or of environmental legislation itself.
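
For reference, the sketch below implements one widely cited formulation of the coefficient of reliability (attributed to Niku and co-workers), which relates the design mean concentration to the permit limit through the coefficient of variation of the effluent; the numerical inputs are illustrative.

```python
import math
from statistics import NormalDist

def coefficient_of_reliability(cv: float, alpha: float) -> float:
    """Coefficient of Reliability (CDC) for lognormally distributed
    effluent concentrations (Niku et al. formulation).

    cv    : coefficient of variation of the effluent concentration
    alpha : allowed probability of exceeding the permit limit
    """
    z = NormalDist().inv_cdf(1.0 - alpha)  # standard normal quantile
    s2 = math.log(1.0 + cv**2)
    return math.sqrt(1.0 + cv**2) * math.exp(-z * math.sqrt(s2))

# Illustrative numbers: CV = 0.6, limit exceeded at most 5% of the time
cdc = coefficient_of_reliability(cv=0.6, alpha=0.05)
limit = 20.0            # hypothetical permit limit, mg/L
design_mean = cdc * limit
print(f"CDC = {cdc:.3f} -> design for a mean of {design_mean:.1f} mg/L")
```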

Relevance: 60.00%

Abstract:

Uncontrolled, reactive maintenance is one of the largest cost factors in production. Managed in a planned and systematic way, maintenance is the single greatest contributor to production efficiency. A significant part of maintaining production efficiency is achieved through equipment dependability, and bringing dependability under control rests on increasing the share of preventive maintenance. At the same time, the cost risk of corrective maintenance decreases and the effort spent on it diminishes; poorly planned maintenance has the opposite effects. The objective is to define equipment criticality based on the dependability of process equipment. The study combines risk assessment methods with which mean times between failures and their consequences for manufacturing are modelled. The criticality factors are availability, reliability, cost factors, safety, and environmental impacts, and a risk analysis table was developed for scoring them. The criticality classes were divided into three categories: A is the most critical, B intermediate, and C the lowest class. Source data were collected by applying a triangulation method. In the empirical part, the equipment of the minced-meat and dry-sausage departments of the HKScan Oy meat processing plant was divided into classes A, B, and C. The most critical equipment accounted for 20 percent of the analysed items; these class-A items cause 80 percent of the cost risks. Class B contains 50 percent and class C 30 percent of the equipment. Identified safety risks were separated from the classification for risk management measures. Cost-aware criticality classification is the foundation for building a maintenance strategy; to support this, tables were presented for creating a maintenance programme and for using the classifications in day-to-day operations.
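
A minimal sketch of the cost-risk side of such an ABC classification is given below: equipment is ranked by annual cost risk (failure frequency × consequence) and assigned to classes by cumulative share of total risk. The machines, numbers, and class thresholds are illustrative assumptions, not data from the case plant.

```python
# ABC criticality classification by cumulative cost risk (illustrative).
machines = {
    # name: (failures per year, cost consequence per failure, EUR)
    "mincer":         (6.0, 12_000),
    "vacuum filler":  (4.0,  7_500),
    "smoking oven":   (1.5,  9_000),
    "conveyor":       (8.0,    900),
    "packaging line": (2.0,  3_000),
}

risk = {name: rate * cost for name, (rate, cost) in machines.items()}
total = sum(risk.values())

cum = 0.0
for name in sorted(risk, key=risk.get, reverse=True):
    # class decided by cumulative risk share reached *before* this item
    cls = "A" if cum < 0.80 else ("B" if cum < 0.95 else "C")
    cum += risk[name] / total
    print(f"{name:15s} risk={risk[name]:8,.0f} EUR/a  class {cls}")
```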

Relevance: 60.00%

Abstract:

This thesis develops and validates the framework of a specialized maintenance decision support system for a discrete-part manufacturing facility. Its construction uses a modular approach based on the fundamental philosophy of Reliability Centered Maintenance (RCM). The proposed architecture uniquely integrates System Decomposition, System Evaluation, Failure Analysis, Logic Tree Analysis, and Maintenance Planning modules, offering a solution tailored to the maintenance inadequacies of modern discrete-part manufacturing systems. Well-established techniques are incorporated as building blocks of the system's modules, including Failure Mode, Effects and Criticality Analysis (FMECA), Logic Tree Analysis (LTA), Theory of Constraints (TOC), and an Expert System (ES). A Maintenance Information System (MIS) performs the system's support functions. Validation was performed by field testing of the system at a Miami-based manufacturing facility. Such a maintenance support system potentially reduces downtime losses and contributes to higher product quality; the ultimate outcome is improved profitability.
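
A minimal sketch of how such a modular pipeline might be wired together is shown below; the module names follow the abstract, but their interfaces, data, and FMECA scores are assumptions made for illustration.

```python
from typing import Protocol

class Module(Protocol):
    def run(self, state: dict) -> dict: ...

class SystemDecomposition:
    def run(self, state: dict) -> dict:
        # Break the facility down into maintainable components (illustrative)
        state["components"] = ["spindle", "coolant pump", "tool changer"]
        return state

class FailureAnalysis:
    def run(self, state: dict) -> dict:
        # FMECA step: rank components by a criticality score (illustrative)
        rpn = {"spindle": 360, "coolant pump": 120, "tool changer": 240}
        state["ranked"] = sorted(state["components"], key=rpn.get, reverse=True)
        return state

class MaintenancePlanning:
    def run(self, state: dict) -> dict:
        # Assign a maintenance policy based on criticality rank
        state["plan"] = [(c, "preventive" if i == 0 else "run-to-failure")
                         for i, c in enumerate(state["ranked"])]
        return state

pipeline = [SystemDecomposition(), FailureAnalysis(), MaintenancePlanning()]
state: dict = {}
for module in pipeline:
    state = module.run(state)
print(state["plan"])
```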

Relevance: 40.00%

Abstract:

There is increasing concern to reduce cost and overhead during the development of reliable systems. Selective protection of the most critical parts of a system is a viable way to obtain a high level of reliability at a fraction of the cost. In particular, to design a selective fault-mitigation strategy for processor-based systems, it is mandatory to identify and prioritize the most vulnerable registers in the register file as the best candidates for protection (hardening). This paper presents an application-based metric to estimate the criticality of each register in the microprocessor register file. The proposed metric combines three criteria based on common features of the executed applications. The applicability and accuracy of the proposal have been evaluated on a set of applications running on different microprocessors. Results show a significant improvement in accuracy compared to previous approaches, regardless of the underlying architecture.
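
The abstract does not state which three criteria the metric combines, so the sketch below is only a generic illustration of an application-based register-criticality score: a weighted combination of per-register profile features (here live interval, read frequency, and dependency fan-out, all assumed) used to decide which registers to harden first.

```python
# Generic application-based register-criticality score (assumed criteria).
def criticality(live_frac: float, read_freq: float, fanout: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """Each criterion is normalised to [0, 1]; higher means more critical."""
    w1, w2, w3 = weights
    return w1 * live_frac + w2 * read_freq + w3 * fanout

# Illustrative per-register profile data for one application
registers = {
    "r1": (0.90, 0.70, 0.80),  # long-lived, frequently read, many dependants
    "r2": (0.20, 0.10, 0.05),
    "sp": (1.00, 0.95, 0.90),  # stack pointer: almost always live
}

ranked = sorted(registers, key=lambda r: criticality(*registers[r]),
                reverse=True)
print("harden first:", ranked)  # protect the top-ranked registers
```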

Relevance: 30.00%

Abstract:

Bridges are currently rated individually for maintenance and repair action according to the structural condition of their elements. Dealing with thousands of bridges, and the many factors that cause deterioration, makes this rating process extremely complicated. The current simplified but practical methods are not accurate enough; the sophisticated, more accurate methods, on the other hand, are only applied to single or particular bridge types. It is therefore necessary to develop a practical and accurate rating system for a network of bridges. The first and most important step towards this aim is to classify bridges based on the differences in nature and the unique characteristics of the critical factors, and the relationships between them, across a network of bridges. Critical factors and vulnerable elements will be identified and placed in different categories. This classification method will then be used to develop a new practical rating method for a network of railway bridges based on criticality and vulnerability analysis. The resulting rating system will be more accurate and economical, and will improve the safety and serviceability of railway bridges.

Relevance: 30.00%

Abstract:

Conditions of bridges deteriorate with age due to critical factors including changes in loading, fatigue, environmental effects, and natural events. In order to rate a network of bridges based on their structural condition, the condition of the components of each bridge, and their effects on the behaviour of the bridge, must be reliably estimated. In this paper, a new method for quantifying the criticality and vulnerability of the components of railway bridges in a network is introduced. The types of structural analysis used to identify the criticality of components for carrying train loads are determined, as are the analytical methods for identifying the vulnerability of components to natural events whose probability of occurrence is significant, such as flood, wind, earthquake, and collision. To keep the method practical for application to a network of thousands of railway bridges, simplicity of structural analysis has been emphasised. Demand-to-capacity ratios of the components at both safety and serviceability condition states, as well as weighting factors used in current bridge management systems (BMS), are taken into consideration. The paper explains what types of information related to the structural condition of a bridge need to be obtained, recorded, and analysed. The authors use this method in the new rating system introduced previously. The significant achievement of this research is enhanced accuracy and reliability in evaluating and predicting the vulnerability of railway bridges to environmental effects and natural events.
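
A minimal sketch of how demand-to-capacity ratios and BMS-style weighting factors could be combined into a component score is given below; the components, limit states, weights, and numbers are illustrative assumptions, not the paper's calibrated values.

```python
# Component rating from demand-to-capacity (D/C) ratios with BMS weights.
components = {
    # name: ({limit state: D/C ratio}, importance weight in the BMS)
    "main girder": ({"safety": 0.85, "serviceability": 0.70}, 1.0),
    "bearing":     ({"safety": 0.60, "serviceability": 0.90}, 0.7),
    "pier":        ({"safety": 0.95, "serviceability": 0.65}, 0.9),
}

def component_score(dc: dict, weight: float) -> float:
    # Governing (largest) D/C ratio scaled by the component weight;
    # a score approaching 1 flags a critical/vulnerable component.
    return weight * max(dc.values())

for name, (dc, w) in sorted(components.items(),
                            key=lambda kv: -component_score(*kv[1])):
    print(f"{name:12s} score = {component_score(dc, w):.2f}")
```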

Relevance: 30.00%

Abstract:

During the last two decades, analysis of 1/f noise in cognitive science has led to considerable progress in the way we understand the organization of our mental life. However, there is still a lack of specific models explaining how 1/f noise is generated in coupled brain-body-environment systems, since existing models and experiments typically target either externally observable behaviour or isolated neuronal systems, and do not address the interplay between neuronal mechanisms and sensorimotor dynamics. We present a conceptual model of a minimal neurorobotic agent solving a behavioural task that makes it possible to relate mechanistic (neurodynamic) and behavioural levels of description. The model consists of a simulated robot controlled by a network of Kuramoto oscillators with homeostatic plasticity and the ability to develop behavioural preferences mediated by sensorimotor patterns. With only three oscillators, this simple model displays self-organized criticality in the form of robust 1/f noise and a wide multifractal spectrum. We show that the emergence of self-organized criticality and 1/f noise in our model is the result of three simultaneous conditions: a) non-linear interaction dynamics capable of generating stable collective patterns, b) internal plastic mechanisms modulating the sensorimotor flows, and c) strong sensorimotor coupling with the environment that induces transient metastable neurodynamic regimes. We carry out a number of experiments to show that both synaptic plasticity and strong sensorimotor coupling play a necessary role, as constituents of self-organized criticality, in the generation of 1/f noise. The experiments also prove useful for testing the robustness of 1/f scaling by comparing the results of different techniques. We finally discuss the role of conceptual models as mediators between nomothetic and mechanistic models, and how they can inform future experimental research in which self-organized criticality includes sensorimotor coupling among the essential interaction-dominant processes giving rise to 1/f noise.
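
To make the ingredients concrete, the sketch below simulates three Kuramoto oscillators with a homeostatically plastic coupling matrix and a toy sensorimotor input. The plasticity rule and all parameters are assumptions for illustration, not the authors' exact equations; the recorded order-parameter trace is what one would subsequently test for 1/f scaling.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, steps = 3, 0.01, 20000
theta = rng.uniform(0, 2 * np.pi, n)        # oscillator phases
omega = np.array([1.0, 1.3, 0.7])           # natural frequencies
K = rng.uniform(0.5, 1.5, (n, n))           # plastic coupling matrix
target, eps = 0.7, 0.05                     # homeostatic set point / rate

sync_trace = []
for t in range(steps):
    diff = theta[None, :] - theta[:, None]          # pairwise phase gaps
    sensor = 0.5 * np.sin(0.002 * t)                # toy sensorimotor input
    theta += dt * (omega + (K * np.sin(diff)).sum(axis=1) + sensor)
    sync = np.cos(diff)                             # pairwise local synchrony
    K += dt * eps * (target - sync)                 # homeostatic plasticity
    sync_trace.append(np.abs(np.exp(1j * theta).mean()))  # order parameter

# The fluctuations of sync_trace are what one would test for 1/f scaling
# (e.g. with detrended fluctuation analysis).
print("mean order parameter:", np.mean(sync_trace[-5000:]))
```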

Relevance: 30.00%

Abstract:

Some 50,000 Win Studies in Chess challenge White to find an effectively unique route to a win. Judging the impact of less than absolute uniqueness requires both technical analysis and artistic judgment. Here, for the first time, an algorithm is defined to help analyse uniqueness in endgame positions objectively. The key idea is to examine how critical certain positions are to White in achieving the win. The algorithm uses sub-n-man endgame tables (EGTs) for both Chess and relevant, adjacent variants of Chess. It challenges authors of EGT generators to generalise them to create EGTs for these chess variants. It has already proved efficient and effective in an implementation for Starchess, itself a variant of chess. The approach also addresses a number of similar questions arising in endgame theory, games and compositions.
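
The core test can be stated very compactly: a position is critical when essentially only one move preserves the win. The sketch below illustrates this with a stand-in dictionary in place of a real endgame-table probe.

```python
from typing import Dict, List

# Hypothetical probe results: for each position, the game-theoretic value
# ("win"/"draw"/"loss" for White) reached by each legal move. A real
# analysis would query EGTs for the chess variant in question.
egt: Dict[str, List[str]] = {
    "pos_a": ["win", "draw", "draw"],  # unique winning move -> critical
    "pos_b": ["win", "win", "draw"],   # two winning moves -> not critical
    "pos_c": ["draw", "draw"],         # no win to preserve
}

def is_critical(position: str) -> bool:
    return egt[position].count("win") == 1

for pos in egt:
    print(pos, "critical" if is_critical(pos) else "not critical")
```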

Relevance: 30.00%

Abstract:

3D video-fluoroscopy is an accurate but cumbersome technique for estimating natural or prosthetic human joint kinematics. This dissertation proposes innovative methodologies to improve the reliability and usability of 3D fluoroscopic analysis. Being based on direct radiographic imaging of the joint, and avoiding the soft-tissue artefact that limits the accuracy of skin-marker-based techniques, fluoroscopic analysis has a potential accuracy of the order of mm/deg or better. It can provide fundamental information for clinical and methodological applications but, notwithstanding the number of methodological protocols proposed in the literature, time-consuming user interaction is required to obtain consistent results. This user-dependency has prevented a reliable quantification of the actual accuracy and precision of the methods and, consequently, slowed down translation to clinical practice. The objective of the present work was to speed up this process by introducing methodological improvements in the analysis. In the thesis, fluoroscopic analysis was characterized in depth in order to evaluate its pros and cons and to provide reliable solutions to overcome its limitations. To this aim, an analytical approach was followed. The major sources of error were isolated with preliminary in-silico studies as: (a) geometric distortion and calibration errors, (b) 2D image and 3D model resolutions, (c) incorrect contour extraction, (d) bone model symmetries, (e) optimization algorithm limitations, and (f) user errors. The effect of each criticality was quantified and verified with a preliminary in-vivo study on the elbow joint. The dominant source of error was identified as the limited extent of the convergence domain of the local optimization algorithms, which forced the user to manually specify the starting pose for the estimation process. To solve this problem, two different approaches were followed: to enlarge the basin of convergence to the optimal pose, the local approach used sequential alignment of the 6 degrees of freedom in order of sensitivity, or a geometrical feature-based estimation of the initial conditions for the optimization; the global approach used an unsupervised memetic algorithm to optimally explore the search domain. The performance of the technique was evaluated with a series of in-silico studies and validated in-vitro with a phantom-based comparison against a radiostereometric gold standard. The accuracy of the method is joint-dependent; for the intact knee joint, the new unsupervised algorithm guaranteed a maximum error lower than 0.5 mm for in-plane translations, 10 mm for out-of-plane translation, and 3 deg for rotations in a mono-planar setup, and lower than 0.5 mm for translations and 1 deg for rotations in a bi-planar setup. The bi-planar setup is best suited when accurate results are needed, such as in methodological research studies; the mono-planar analysis may be sufficient for clinical applications where analysis time and cost are an issue. A further reduction of user interaction was obtained for prosthetic joint kinematics: a mixed region-growing and level-set segmentation method was proposed that halved the analysis time, delegating the computational burden to the machine. In-silico and in-vivo studies demonstrated that the reliability of the new semi-automatic method was comparable to a user-defined manual gold standard. The improved fluoroscopic analysis was finally applied to a first in-vivo methodological study of foot kinematics. Preliminary evaluations showed that the presented methodology represents a feasible gold standard for the validation of skin-marker-based foot kinematics protocols.
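
As an illustration of the sequential-alignment idea, the sketch below minimises a toy registration cost one degree of freedom at a time, most observable parameters first, using SciPy's scalar minimiser. The cost function, sensitivity order, and weights are stand-ins for the real projected-contour mismatch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

true_pose = np.array([2.0, -1.0, 40.0, 5.0, -3.0, 1.5])  # tx ty tz rx ry rz
sensitivity_order = [0, 1, 5, 3, 4, 2]  # in-plane dofs first (assumed)

def cost(pose: np.ndarray) -> float:
    # Stand-in for: project the 3D model with `pose`, compare with the
    # extracted image contour. Out-of-plane translation is weighted low
    # because it is poorly observable in a mono-planar setup.
    w = np.array([1.0, 1.0, 0.02, 0.5, 0.5, 1.0])
    return float(np.sum(w * (pose - true_pose) ** 2))

pose = np.zeros(6)  # starting pose (normally user-supplied or feature-based)
for _ in range(3):          # a few sweeps over all degrees of freedom
    for i in sensitivity_order:
        res = minimize_scalar(lambda v: cost(np.r_[pose[:i], v, pose[i+1:]]),
                              bounds=(-50, 50), method="bounded")
        pose[i] = res.x

print("estimated pose:", np.round(pose, 2))
```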

Relevance: 30.00%

Abstract:

Burn-up credit analyses are based on depletion calculations that provide an accurate prediction of spent-fuel isotopic contents, followed by criticality calculations to assess k_eff

Relevance: 30.00%

Abstract:

In the framework of the OECD/NEA project Benchmark for Uncertainty Analysis in Modeling (UAM) for Design, Operation, and Safety Analysis of LWRs, several approaches and codes are being used to deal with the exercises proposed in Phase I, "Specifications and Support Data for Neutronics Cases." At UPM, our research group treats these exercises with sensitivity calculations and the "sandwich formula" to propagate cross-section uncertainties. Two different codes are employed to calculate the sensitivity coefficients of k_eff to cross sections in criticality calculations: MCNPX-2.7e and SCALE-6.1. The former uses the Differential Operator Technique and the latter the Adjoint-Weighted Technique. In this paper, the main results for exercise I-2, "Lattice Physics," are presented for the criticality calculations of a PWR. These criticality calculations are performed for a TMI fuel assembly at four different states: HZP-Unrodded, HZP-Rodded, HFP-Unrodded, and HFP-Rodded. The results of the two codes are presented and compared. The comparison shows good agreement between SCALE-6.1 and MCNPX-2.7e in the uncertainty obtained from the sensitivity coefficients calculated by both codes. Differences are found when the sensitivity profiles are analysed, but they do not lead to differences in the uncertainty.
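
For reference, the sandwich formula itself is a one-liner once the sensitivity vector and covariance matrix are available, as the sketch below shows with illustrative numbers.

```python
import numpy as np

# Sandwich formula for propagating cross-section covariances to k_eff:
# var(k) = S^T C S, with S the relative sensitivities dk/k per relative
# change in each cross section. All numbers below are toy values.
S = np.array([0.25, -0.10, 0.05])     # sensitivities to 3 reactions
C = np.array([[4.0e-4, 1.0e-4, 0.0],  # relative covariance matrix
              [1.0e-4, 9.0e-4, 0.0],
              [0.0,    0.0,    1.0e-4]])

var_k = S @ C @ S          # relative variance of k_eff
print(f"relative uncertainty on k_eff: {np.sqrt(var_k) * 100:.3f} %")
```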

Relevance: 30.00%

Abstract:

Numerical simulations of axisymmetric reactive jets with one-step Arrhenius kinetics are used to investigate the problem of deflagration initiation in a premixed fuel–air mixture by the sudden discharge of a hot jet of its adiabatic reaction products. For the moderately large values of the jet Reynolds number considered in the computations, chemical reaction is seen to occur initially in the thin mixing layer that separates the hot products from the cold reactants. This mixing layer is wrapped around by the starting vortex, thereby enhancing mixing at the jet head, and is followed by an annular mixing layer that trails behind, connecting the leading vortex with the orifice rim. A successful deflagration is seen to develop for values of the orifice radius larger than a critical value a_c of the order of the flame thickness of the planar deflagration δ_L. Introduction of appropriate scales provides the dimensionless formulation of the problem, with flame initiation characterised in terms of a critical Damköhler number Δ_c = (a_c/δ_L)², whose parametric dependence is investigated. The numerical computations reveal that, while the jet Reynolds number exerts a limited influence on the criticality conditions, the effect of the reactant diffusivity on ignition is much more pronounced, with the value of Δ_c increasing significantly with increasing Lewis number. The reactant diffusivity also affects the way ignition takes place: for reactants with Lewis numbers of order unity the flame develops as a result of ignition in the annular mixing layer surrounding the developing jet stem, whereas for highly diffusive reactants with Lewis numbers sufficiently smaller than unity combustion is initiated in the mixed core formed around the starting vortex. The analysis provides increased understanding of deflagration initiation processes, including the effects of differential diffusion, and points to the need for further investigations incorporating detailed chemistry models for specific fuel–air mixtures.
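
A worked example of the criticality criterion is sketched below: ignition is predicted when the Damköhler number (a/δ_L)² exceeds Δ_c, with Δ_c growing with the Lewis number as the abstract indicates; all numerical values are illustrative.

```python
# Ignition criterion: Δ = (a/δ_L)² must exceed the critical value Δ_c.
delta_L = 0.4e-3          # planar deflagration thickness, m (toy value)
crit_damkohler = {0.7: 2.0, 1.0: 4.0, 1.4: 9.0}  # Le -> Δ_c (illustrative)

def ignites(orifice_radius: float, lewis: float) -> bool:
    damkohler = (orifice_radius / delta_L) ** 2
    return damkohler > crit_damkohler[lewis]

for Le in (0.7, 1.0, 1.4):
    a_c = delta_L * crit_damkohler[Le] ** 0.5  # critical radius from Δ_c
    print(f"Le={Le}: critical orifice radius ≈ {a_c * 1e3:.2f} mm,"
          f" a=1 mm ignites: {ignites(1e-3, Le)}")
```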

Relevance: 30.00%

Abstract:

Complexity originates from the tendency of large dynamical systems to organize themselves into a critical state, with avalanches or "punctuations" of all sizes. In the critical state, events which would otherwise be uncoupled become correlated. The apparent, historical contingency in many sciences, including geology, biology, and economics, finds a natural interpretation as a self-organized critical phenomenon. These ideas are discussed in the context of simple mathematical models of sandpiles and biological evolution. Insights are gained not only from numerical simulations but also from rigorous mathematical analysis.
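
The sandpile model mentioned here is easy to reproduce; the sketch below implements the standard Bak-Tang-Wiesenfeld rules, in which sites holding four or more grains topple, and avalanche sizes self-organize toward a power-law distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 20
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(20000):
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1                       # drop one grain at random
    size = 0
    while (unstable := np.argwhere(grid >= 4)).size:
        for x, y in unstable:
            grid[x, y] -= 4               # topple: shed 4 grains
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < L and 0 <= y + dy < L:
                    grid[x + dx, y + dy] += 1  # grains off the edge are lost
            size += 1
    avalanche_sizes.append(size)

# A histogram of avalanche_sizes approximates the power law of the
# critical state.
print("largest avalanche:", max(avalanche_sizes))
```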