985 results for Failure Scenarios


Relevance:

70.00%

Publisher:

Abstract:

Decision-making in product quality is an indispensable stage in product development, undertaken to reduce product development risk. Based on the identified deficiencies of quality function deployment (QFD) and failure modes and effects analysis (FMEA), a novel decision-making method is presented that draws upon a knowledge network of failure scenarios. An ontological expression of failure scenarios is presented together with a framework of a failure knowledge network (FKN). According to their roles in failure processing, quality characteristics (QCs) are divided into three categories, namely perceptible QCs, restrictive QCs, and controllable QCs, which represent the monitoring targets, control targets and improvement targets, respectively, for quality management. A mathematical model and algorithms based on the analytic network process (ANP) are introduced for calculating the priority of QCs with respect to different development scenarios. A case study is provided following the proposed FKN-based decision-making procedure, and the methodology is applied in the propeller design process to solve the problem of prioritising QCs. This paper provides a practical approach for decision-making in product quality. Copyright © 2011 Inderscience Enterprises Ltd.
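
The abstract does not reproduce the ANP computation itself. As a rough, hypothetical illustration of how QC priorities are commonly derived in ANP-style models, the sketch below power-iterates a small column-stochastic supermatrix to its limit priorities; the matrix entries and the three QC groups are invented for illustration and are not taken from the paper.

import numpy as np

# Hypothetical column-stochastic supermatrix linking three QC categories
# (perceptible, restrictive, controllable); values are illustrative only.
W = np.array([
    [0.2, 0.5, 0.3],
    [0.5, 0.2, 0.4],
    [0.3, 0.3, 0.3],
])

def anp_priorities(W, iterations=200):
    """Approximate ANP limit priorities by repeatedly applying the supermatrix."""
    v = np.full(W.shape[0], 1.0 / W.shape[0])
    for _ in range(iterations):
        v = W @ v
        v /= v.sum()          # renormalise so the priorities sum to 1
    return v

print(anp_priorities(W))      # relative priority of each QC group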

Relevance:

70.00%

Publisher:

Abstract:

Fixation failure of glenoid components is the main cause of unsuccessful total shoulder arthroplasties. The characteristics of these failures are still not well understood; hence, attempts at improving the implant fixation are somewhat blind and the failure rate remains high. This lack of understanding is largely due to the fundamental problem that direct observation of failure is impossible because the fixation is inherently embedded within the bone. Twenty custom-made implants, reflecting various common fixation designs, and a specimen set-up were prepared to enable direct observation of failure while the specimens were exposed to cyclic superior loads during laboratory experiments. Finite element analyses of the laboratory tests were also carried out to explain the observed failure scenarios. All implants, irrespective of the particular fixation design, failed at the implant-cement interface, and failure initiated at the inferior part of the component fixation. Finite element analyses indicated that this failure scenario was caused by a weak and brittle implant-cement interface and by tensile stresses in the inferior region, possibly worsened by a stress-raiser effect at the inferior rim. The results of this study indicate that glenoid failure can be delayed or prevented by improving the implant-cement interface strength. Any design feature that reduces the geometrical stress raiser, and the inferior tensile stresses in general, should also delay implant loosening.

Relevance:

60.00%

Publisher:

Abstract:

This study established that a computer model, the Anaerobic Digestion Model No. 1 (ADM1), is suitable for investigating the stability and energy balance of the anaerobic digestion of food waste. In simulations, digestion of undiluted food waste was less stable than that of sewage sludge or mixtures of the two, but gave much higher average methane yields per unit of digester volume. In the best-case simulations, food waste produced 5.3 Nm3 of methane per day per m3 of digester volume, much higher than sewage sludge alone at 1.1 Nm3 of methane per day per m3. There was no substantial difference in the yield per unit of volatile solids added. Food waste, however, did not sustain stable digestion if its cation content was below a certain level; mixing food waste with sewage sludge allowed digestion at a lower cation content. Changes in the composition of the food waste feedstock caused great variation in biogas output, and even greater variation in volatile fatty acid concentration, which lowered digestion stability. Modelling anaerobic digestion allowed simulation of failure scenarios and gave insights into the importance of the cation/anion balance and the magnitude of variability in feedstocks.
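
As a small worked comparison of the volumetric figures quoted above (a sketch only; the digester volume is an arbitrary assumption, not a value from the study):

# Volumetric methane yields reported in the simulations (Nm3 CH4 per m3 of digester per day)
food_waste_yield = 5.3
sewage_sludge_yield = 1.1

digester_volume_m3 = 1000.0   # assumed digester size, for illustration only

print(f"Food waste:    {food_waste_yield * digester_volume_m3:.0f} Nm3 CH4/day")
print(f"Sewage sludge: {sewage_sludge_yield * digester_volume_m3:.0f} Nm3 CH4/day")
print(f"Ratio:         {food_waste_yield / sewage_sludge_yield:.1f}x")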

Relevance:

60.00%

Publisher:

Abstract:

The emergent behaviour of autonomic systems, together with the scale of their deployment, impedes prediction of the full range of configuration and failure scenarios; it is therefore not possible to devise management and recovery strategies that cover all possible outcomes. One solution to this problem is to embed self-managing and self-healing abilities into such applications. Traditional design approaches favour determinism, even when it is unnecessary, and this can lead to conflicts between non-functional requirements. Natural systems such as ant colonies have evolved cooperative, finely tuned emergent behaviours that allow the colonies to function at very large scale and to be very robust, although they are non-deterministic. Simple pheromone-exchange communication systems are highly efficient and are a major contributor to their success. This paper proposes that we look to natural systems for inspiration when designing architectures and communication strategies, and presents an election algorithm that encapsulates non-deterministic behaviour to achieve high scalability, robustness and stability.
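
The election algorithm itself is not given in the abstract. The sketch below shows one generic way a non-deterministic election among peers might look, purely as an illustration; the random back-off tie-breaking is an assumption, not the authors' design.

import random

class Node:
    """A peer that may put itself forward as coordinator after a random delay."""
    def __init__(self, node_id):
        self.node_id = node_id
        # Random back-off: non-determinism avoids synchronised contention.
        self.backoff = random.uniform(0.0, 1.0)

def elect(nodes):
    """Return the node whose random back-off expires first (ties broken by id)."""
    return min(nodes, key=lambda n: (n.backoff, n.node_id))

nodes = [Node(i) for i in range(10)]
leader = elect(nodes)
print(f"Node {leader.node_id} elected as coordinator")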

Relevance:

60.00%

Publisher:

Abstract:

Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This imposes not only the cost of maintaining the job, but also the cost of the time taken to reinstate the job and the risk of losing data and work accomplished by the job before it failed. Approaches that can proactively detect computing core failures and take action to relocate the affected core's work onto reliable cores are a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and the core level. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the proposed approaches are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches on average add 90% to the actual time for executing the job, whereas in the same experiment the multi-agent approaches add only 10% to the overall execution time.
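
The abstract reports the overheads but not the mechanism. As a purely illustrative sketch (not the authors' implementation), an agent carrying a slice of a parallel reduction might be relocated to a healthy core when a failure of its current core is predicted:

# Illustrative sketch: an agent carries a sub-task (a slice to be summed) and
# moves to a healthy core if its current core is predicted to fail.
class Agent:
    def __init__(self, data_slice, core):
        self.data_slice = data_slice
        self.core = core
        self.partial_sum = None

    def compute(self):
        self.partial_sum = sum(self.data_slice)

    def relocate(self, healthy_core):
        # On a predicted core failure the agent migrates, keeping its data.
        self.core = healthy_core

def reduce_with_failure(agents, predicted_failures, spare_cores):
    spares = list(spare_cores)
    for agent in agents:
        if agent.core in predicted_failures and spares:
            agent.relocate(spares.pop())
        agent.compute()
    return sum(a.partial_sum for a in agents)

data = list(range(100))
agents = [Agent(data[i::4], core=i) for i in range(4)]
total = reduce_with_failure(agents, predicted_failures={2}, spare_cores=[10, 11])
print(total)  # 4950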

Relevance:

60.00%

Publisher:

Abstract:

Hydrogeochemical numerical models are used worldwide today to simulate natural phenomena and phenomena arising from human activities. These models help us understand the surrounding environment, its spatial variability and its evolution over time. This work presents the development of hydrogeochemical numerical models applied in the context of deep geological repositories for high-level nuclear waste. Performance assessment of a deep geological repository includes the study of the geochemical evolution of the repository as well as the analysis of repository failure scenarios and their environmental consequences. If radionuclides were to escape accidentally from a repository, they could cross the engineered and natural barriers that make up the repository and eventually reach surface ecosystems. In that case, subsurface sediments constitute the last natural barrier before the surface ecosystems. In this work, numerical models integrating biogeochemical, geochemical, hydrodynamic and solute-transport processes were developed to understand and quantify the influence of these processes on radionuclide mobility in subsurface systems. The results obtained reflect the robustness of the numerical tools used to develop descriptive and predictive simulations of the hydrogeochemical processes that influence radionuclide mobility. The (descriptive) simulation of a laboratory experiment shows that microbial activity lowers the redox potential of the groundwater, which in turn favours the retention of redox-sensitive radionuclides such as uranium. The predictive simulations indicate that co-precipitation with minerals of major elements, precipitation of pure phases, cation exchange and adsorption onto mineral surfaces favour the retention of U, Cs, Sr and Ra in the solid phase of a glacial clay and a calcite-rich moraine. Labelling the radionuclides in the numerical simulations made it possible to conclude that isotopic dilution plays an important role in the potential impact of radionuclides on subsurface systems. Effective distribution coefficients can be calculated from the results of the numerical simulations. This methodology makes it possible to simulate long-duration tracer tests that would not be feasible on the scale of a human lifetime. Retardation coefficients that are useful in the context of the performance assessment of deep geological repositories can be obtained from these simulations.
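
The abstract mentions deriving effective distribution coefficients and retardation coefficients from the simulation results. The sketch below shows the conventional relationship between the two quantities under linear sorption; the definitions are standard, but the parameter values are placeholders rather than results from the thesis.

def effective_kd(c_solid, c_aqueous):
    """Effective distribution coefficient Kd = sorbed concentration / dissolved concentration
    (e.g. mol/kg divided by mol/L gives L/kg)."""
    return c_solid / c_aqueous

def retardation_factor(kd, bulk_density, porosity):
    """Classical linear-sorption retardation factor R = 1 + (rho_b / theta) * Kd."""
    return 1.0 + (bulk_density / porosity) * kd

# Placeholder values, for illustration only
kd = effective_kd(c_solid=2.0e-6, c_aqueous=1.0e-6)          # -> 2.0 L/kg
R = retardation_factor(kd, bulk_density=1.6, porosity=0.3)   # rho_b in kg/L
print(kd, R)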

Relevance:

60.00%

Publisher:

Abstract:

Final Master's project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering.

Relevance:

60.00%

Publisher:

Abstract:

Network survivability is a very interesting field of technical study as well as a critical concern in network design. Given that more and more data are carried over communication networks, a single failure can disrupt millions of users and cause millions of dollars of lost revenue. Network protection techniques consist of providing spare capacity in a network and automatically rerouting flows around a failure using that available capacity. This thesis addresses the design of survivable optical networks using protection schemes based on p-cycles. More precisely, path-protecting p-cycles are exploited in the context of link failures. Our study focuses on establishing p-cycle protection structures under the assumption that the working paths for the whole set of requests are defined a priori. Most existing work relies on heuristics or on solution methods that struggle with large instances. The objective of this thesis is twofold. On the one hand, we propose models and solution methods able to tackle larger problems than those previously reported in the literature. On the other hand, thanks to new algorithms, we are able to produce optimal or near-optimal solutions. To this end, we rely on column generation, a technique well suited to solving large-scale linear programs. In this project, column generation is used as an intelligent way of implicitly enumerating promising cycles. We first propose formulations for the master problem and the pricing problem, together with a first column generation algorithm for the design of networks protected by path-protecting p-cycles. The algorithm obtains better solutions, in reasonable time, than existing methods. A more compact formulation is then proposed for the pricing problem. In addition, we present a new hierarchical decomposition method that greatly improves the overall efficiency of the algorithm. Regarding integer solutions, we propose two heuristic methods that find good solutions. We also carry out a systematic comparison between p-cycles and classical shared-protection schemes, using unified column-generation-based formulations to obtain high-quality results. We then empirically evaluate the directed and undirected versions of p-cycles for link protection and for path protection under asymmetric traffic scenarios, and show the additional protection cost incurred when bidirectional systems are used in such scenarios. Finally, we study a column generation formulation for the design of p-cycle networks in the presence of availability requirements and obtain the first lower bounds for this problem.
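
The thesis formulations are not reproduced in the abstract. The sketch below only illustrates the generic column-generation loop (restricted master problem plus pricing) that the work builds on; both the master and the cycle-pricing steps are stubs with invented values, not the actual p-cycle formulations.

# Generic column-generation skeleton (illustrative only): alternate between a
# restricted master problem (RMP) over the cycles generated so far and a
# pricing problem that looks for a cycle with negative reduced cost.

def solve_restricted_master(columns):
    # Placeholder: would solve an LP over the current set of candidate p-cycles
    # and return its objective value and dual prices; here both are faked.
    duals = {"link_%d" % i: 1.0 / (len(columns) + 1) for i in range(3)}
    objective = sum(duals.values())
    return objective, duals

def price_new_cycle(duals, round_index):
    # Placeholder pricing problem: would search for a cycle whose reduced cost
    # is negative given the duals; here generation simply stops after two rounds.
    if round_index < 2:
        return {"cycle_id": round_index, "reduced_cost": -0.1}
    return None   # no improving cycle -> column generation has converged

columns = []
for round_index in range(100):
    objective, duals = solve_restricted_master(columns)
    new_column = price_new_cycle(duals, round_index)
    if new_column is None:
        break
    columns.append(new_column)

print(f"Converged with {len(columns)} generated cycles, objective {objective:.3f}")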

Relevance:

60.00%

Publisher:

Abstract:

The work reported in this paper is motivated by the need to handle single node failures in parallel summation algorithms on computer clusters. An agent-based approach is proposed in which a task to be executed is decomposed into sub-tasks and mapped onto agents that traverse computing nodes. The agents intercommunicate across computing nodes to share information in the event of a predicted node failure. Two single node failure scenarios are considered. The Message Passing Interface is employed for implementing the proposed approach. Quantitative results obtained from experiments reveal that the agent-based approach can handle failures more efficiently than traditional failure handling approaches.
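
As a purely illustrative sketch of agents sharing information ahead of a predicted node failure (not the paper's MPI implementation), an agent can hand its partial sum to a peer so that no completed work is lost:

# Illustrative hand-off sketch: when a node failure is predicted, the agent on
# that node passes its partial sum to a peer agent so no work is lost.
class SummationAgent:
    def __init__(self, node_id, data_slice):
        self.node_id = node_id
        self.partial_sum = sum(data_slice)

    def hand_off(self, peer):
        """Share state with a peer before this agent's node goes down."""
        peer.partial_sum += self.partial_sum
        self.partial_sum = 0

agents = [SummationAgent(i, range(i * 25, (i + 1) * 25)) for i in range(4)]
# Node 3 is predicted to fail: its agent hands its partial result to node 2's agent.
agents[3].hand_off(agents[2])
surviving = [a for a in agents if a.node_id != 3]
print(sum(a.partial_sum for a in surviving))   # 4950, the correct total for 0..99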

Relevance:

60.00%

Publisher:

Abstract:

The hierarchical forest planning process currently in place on public lands risks failing at two levels. At the upper level, the current process does not provide sufficient evidence of the sustainability of the current harvest level. At a lower level, the current process does not support realising the full value-creation potential of the forest resource, sometimes unnecessarily constraining short-term harvest planning. These failures are attributable to certain assumptions implicit in the wood supply (allowable cut) optimisation model, which may explain why the problem is not well documented in the literature. We use agency theory to model the hierarchical forest planning process on public lands. We develop a two-stage iterative simulation framework to estimate the long-term effect of the interaction between the State and the fibre consumer, which allows us to establish conditions that can lead to stock-outs. We then propose an improved formulation of the wood supply optimisation model. The classical formulation (i.e., maximisation of the sustained fibre yield) does not consider that the industrial fibre consumer seeks to maximise profit; instead, it assumes total consumption of the fibre supply in each period, regardless of its value-creation potential. We extend the classical formulation to anticipate the fibre consumer's behaviour, thereby increasing the probability that the fibre supply will be entirely consumed and restoring the validity of the total-consumption assumption implicit in the optimisation model. We model the principal-agent relationship between the government and industry using a bilevel formulation of the optimisation model, in which the upper level represents the wood supply determination process (the government's responsibility) and the lower level represents the fibre consumption process (industry's responsibility). We show that the bilevel formulation can mitigate the risk of stock-outs, thereby improving the credibility of the hierarchical forest planning process. Together, the bilevel wood supply optimisation model and the methodology we developed to solve it to optimality represent an alternative to the methods currently in use. Our bilevel model and the iterative simulation framework are a step forward in value-driven forest planning technology. Explicitly integrating industrial objectives and constraints into the forest planning process, starting at the wood supply determination stage, should foster greater collaboration between government and industry, making it possible to exploit the full value-creation potential of the forest resource.
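
As a toy illustration of the bilevel (principal-agent) idea described above, and not the thesis model, the sketch below lets an upper level choose a fibre supply while a lower level consumes only the volume that is profitable; all prices, costs and the penalty weight are invented.

# Toy bilevel illustration (not the thesis model): the upper level (government)
# picks a periodic fibre supply; the lower level (industry) then consumes only
# the volume that is profitable. All numbers are invented.

def industry_consumption(supply, price=50.0, unit_cost_curve=lambda q: 30.0 + 0.02 * q):
    """Lower level: consume fibre only while the marginal unit is profitable."""
    q = 0.0
    while q < supply and unit_cost_curve(q) < price:
        q += 1.0
    return q

def government_objective(supply):
    """Upper level: prefer high consumed supply, but penalise unconsumed (stranded) volume."""
    consumed = industry_consumption(supply)
    stranded = supply - consumed
    return consumed - 2.0 * stranded

best_supply = max(range(0, 2001, 50), key=government_objective)
print(best_supply, industry_consumption(best_supply))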

Relevance:

60.00%

Publisher:

Abstract:

Transportation system resilience has been the subject of several recent studies. To assess the resilience of a transportation network, however, it is essential to model its interactions with, and reliance on, other lifelines. In this work, a bi-level, mixed-integer, stochastic program is presented for quantifying the resilience of a coupled traffic-power network under a host of potential natural or anthropogenic hazard-impact scenarios. A two-layer network representation is employed that includes details of both systems. Interdependencies between the urban traffic and electric power distribution systems are captured through linking variables and logical constraints. The modeling approach was applied to a case study developed on a portion of the signalized traffic-power distribution system in southern Minneapolis. The results of the case study show the importance of explicitly considering interdependencies between critical infrastructures in transportation resilience estimation. The results also provide insights into lifeline performance from an alternative power perspective.
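
A minimal sketch of what a linking variable and logical constraint between the two layers might look like, in spirit only; the feeder and signal names, capacities and the degradation factor are invented and do not come from the paper's program.

# Illustrative interdependency sketch (not the paper's model): a signalised
# intersection only operates normally if the power feeder serving it is energised.
feeder_up = {"F1": True, "F2": False}                 # hypothetical hazard scenario
signal_feeder = {"S1": "F1", "S2": "F2", "S3": "F1"}

def operational_signals(signal_feeder, feeder_up):
    # Linking variable in spirit: signal state is tied to the state of its feeder.
    return {s: feeder_up[f] for s, f in signal_feeder.items()}

def intersection_capacity(base_capacity, signal_ok, degraded_share=0.5):
    # Logical constraint in spirit: a dark signal degrades intersection capacity.
    return base_capacity if signal_ok else degraded_share * base_capacity

states = operational_signals(signal_feeder, feeder_up)
for s, ok in states.items():
    print(s, intersection_capacity(1800.0, ok))       # veh/h, illustrative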

Relevance:

30.00%

Publisher:

Abstract:

Optimal operation and maintenance of engineering systems rely heavily on the accurate prediction of their failures. Most engineering systems, especially mechanical systems, are susceptible to failure interactions. These interactions can be estimated for repairable engineering systems when determining optimal maintenance strategies for them. An extended Split System Approach is developed in this paper. The technique is based on the Split System Approach and a model for interactive failures. The approach was applied to simulated data. The results indicate that failure interactions increase the hazard of newly repaired components, and that the preventive maintenance intervals of a system with failure interactions become shorter compared with scenarios where failure interactions do not exist.
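
A minimal sketch of the qualitative effect described above, in which an interactive term raises the hazard of a newly repaired component; the Weibull baseline and the interaction coefficient are invented placeholders, not the paper's extended Split System model.

def weibull_hazard(t, beta=2.0, eta=1000.0):
    """Baseline Weibull hazard of a component (shape beta, scale eta, t in hours)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def interactive_hazard(t, theta=0.3, influencing_hazard=0.002):
    """Hazard with an interactive term: a fraction theta of an influencing
    component's hazard is added to the newly repaired component's own hazard."""
    return weibull_hazard(t) + theta * influencing_hazard

t = 200.0
print(weibull_hazard(t), interactive_hazard(t))   # the interaction raises the hazard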

Relevance:

30.00%

Publisher:

Abstract:

Future air traffic management concepts often involve proposals for automated separation management algorithms that replace human air traffic controllers. This paper proposes a new type of automated separation management algorithm (based on the satisficing approach) that utilizes inter-aircraft communication and a track file manager (or bank of Kalman filters) and is capable of resolving conflicts during periods of communication failure. The proposed separation management algorithm is tested in a range of flight scenarios involving periods of communication failure, in both simulation and flight tests (the flight tests were conducted as part of the Smart Skies project). The intention of the flight tests was to investigate the benefits of using inter-aircraft communication to provide an extra layer of safety in support of air traffic management during periods of communication network failure. These benefits were confirmed.
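
As a hypothetical sketch of the track-file idea (not the Smart Skies implementation), a per-aircraft constant-velocity Kalman filter can keep propagating a track by prediction alone while position reports are interrupted by a communication failure; all noise settings below are assumptions.

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])               # position is the only measurement
Q = 0.01 * np.eye(2)                      # process noise (assumed)
R = np.array([[25.0]])                    # measurement noise (assumed)

x = np.array([[0.0], [50.0]])             # initial state: 0 m, 50 m/s
P = 100.0 * np.eye(2)

def step(x, P, measurement=None):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    if measurement is not None:           # Update only when a report arrives
        y = measurement - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P

reports = [np.array([[52.0]]), np.array([[101.0]]), None, None, np.array([[255.0]])]
for z in reports:                          # None marks a communication dropout
    x, P = step(x, P, z)
    print(float(x[0, 0]), "predicted only" if z is None else "updated")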

Relevance:

30.00%

Publisher:

Abstract:

"Defrauding land titles systems impacts upon us all. Those who deal in land include ordinary citizens, big business, small business, governments, not-for-profit organisation, deceased estates...Fraud here touches almost everybody." the thesis presented in this paper is that the current and disparate steps taken by jurisdictions to alleviate land fraud associated with identity-based crimes are inadequate. The centrepiece of the analysis is the consideration of two scenarios that have recently occurred. One is the typical scenario where a spouse forges the partner's signature to obtain a mortgage from a financial institution. The second is atypical. It involves a sophisticated overseas fraud duping many stakeholders involved in the conveyancing process. After outlining these scenarios, we will examine how identity verification requirements of the United Kingdom, Ontario, the Australian states, and New Zealand would have been applied to these two frauds. Our conclusion is that even though some jurisdictions may have prevented the frauds from occurring, the current requirements are inadequate. We use the lessons learnt to propose what we consider core principles for identity verification in land transactions.

Relevance:

30.00%

Publisher:

Abstract:

Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. Incorrectly fixed ambiguities, however, can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the strength of the underlying model, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirements. Undetected incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution, this miss-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, a criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. This method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulations. The results show that, with a proper stochastic model, the fixed failure rate approach is a more reasonable ambiguity validation method.
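
For concreteness, a minimal sketch of the ratio test decision and of looking the threshold up in a precomputed criteria table; the table entries below are invented placeholders, not values from the paper.

def ratio_test(squared_norm_best, squared_norm_second, threshold):
    """Accept the fixed ambiguities if the second-best candidate is sufficiently
    worse than the best. Conventions differ; here acceptance requires
    second-best / best >= threshold."""
    return (squared_norm_second / squared_norm_best) >= threshold

# Hypothetical criteria table: threshold as a function of (model strength proxy,
# target failure rate). Values are invented placeholders, not the paper's table.
criteria_table = {
    ("weak_model", 0.01): 3.0,
    ("strong_model", 0.01): 1.5,
}

threshold = criteria_table[("strong_model", 0.01)]
accepted = ratio_test(squared_norm_best=0.8, squared_norm_second=2.0, threshold=threshold)
print("fix accepted" if accepted else "fall back to the float solution")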