12 results for failure analysis strategy
in University of Queensland eSpace - Australia
Abstract:
Experiments with simulators allow psychologists to better understand the causes of human errors and to build models of cognitive processes for use in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra, and patterns of behaviour are expressed as temporal logic properties. A model-checking technique is then used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete, and a new behavioural pattern is identified which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.
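The soundness/completeness check described above can be sketched, in much-simplified form, outside CSP and temporal logic. The toy traces, pattern names and predicates below are invented for illustration and are not the paper's model:

```python
# Minimal sketch of checking whether a decomposition of operator behaviour
# into patterns is sound and complete with respect to a finite trace model.
# "Sound": every pattern matches at least one model trace.
# "Complete": every model trace matches at least one pattern.

def check_decomposition(model_traces, patterns):
    unsound = [name for name, pred in patterns.items()
               if not any(pred(t) for t in model_traces)]
    uncovered = [t for t in model_traces
                 if not any(pred(t) for pred in patterns.values())]
    return unsound, uncovered

# Hypothetical operator model: each trace is a sequence of observed actions.
model_traces = [
    ("scan", "detect", "resolve"),
    ("scan", "detect", "defer", "resolve"),
    ("scan", "miss"),
]
patterns = {
    "immediate_resolution": lambda t: t[-1] == "resolve" and "defer" not in t,
    "deferred_resolution": lambda t: "defer" in t and t[-1] == "resolve",
}
unsound, uncovered = check_decomposition(model_traces, patterns)
# `uncovered` exposes the overlooked behaviour ("scan", "miss"), mirroring how
# model checking revealed a missing behavioural pattern in the paper.
```

A real model checker explores the full (possibly infinite) trace set symbolically; this enumerative version only conveys the shape of the check.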
Abstract:
We report a method using variation in the chloroplast genome (cpDNA) to test whether oak stands of unknown provenance are of native and/or local origin. As an example, a sample of test oaks, mostly of unknown status with respect to nativeness and localness, was surveyed for cpDNA type. The sample comprised 126 selected trees derived from 16 British seed stands and 75 trees selected for their superior phenotype (201 tree samples in total). To establish whether these two test groups are native and local, their cpDNA types were compared with those of material of known autochthonous origin (the results of a previous study which examined variation in 1076 trees from 224 populations distributed across Great Britain). In that survey of autochthonous material, four cpDNA types were identified as native; thus if a test sample possessed a new haplotype it could be classed as non-native. Every one of the 201 test samples possessed one of the four cpDNA types found within the autochthonous sample; therefore none could be proven to be introduced and, on this basis, all were considered likely to be native. The previous study of autochthonous material also found that cpDNA variation was highly structured geographically; therefore, if the cpDNA type of a test sample did not match that of neighbouring autochthonous trees, it could be considered non-local. A high proportion of the seed stand group (44.2 per cent) and of the phenotypically superior trees (58.7 per cent) possessed a cpDNA haplotype which matched that of the neighbouring autochthonous trees and can therefore be considered local, or at least cannot be proven to be introduced. The remainder of the test sample could be divided into those which did not grow in an area of overall dominance (18.7 per cent of seed stand trees and 28 per cent of phenotypically superior trees) and those which failed to match the neighbouring autochthonous haplotype (37.1 per cent and 13.3 per cent, respectively).
Most of the non-matching test samples were located within 50 km of an area dominated by a matching autochthonous haplotype (96.0 per cent and 93.5 per cent, respectively), which potentially indicates only local transfer. While such genetic fingerprinting tests have proven useful for assessing the origin of stands of unknown provenance, there are potential limitations to using a marker from the chloroplast genome (which is mostly adaptively neutral) to classify seed material into categories which have adaptive implications. These limitations are discussed, particularly in the context of selecting adaptively superior material for restocking native forests.
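The classification logic described in this abstract can be summarised as a small decision procedure. The haplotype labels below are invented placeholders, not the study's actual cpDNA types:

```python
# Sketch of the nativeness/localness classification: a tree is "native" if its
# haplotype occurs anywhere in the autochthonous survey, and "local" if it also
# matches the haplotype dominant among neighbouring autochthonous trees.

NATIVE_HAPLOTYPES = {"A", "B", "C", "D"}  # stands in for the four native cpDNA types

def classify(tree_haplotype, neighbouring_haplotype):
    if tree_haplotype not in NATIVE_HAPLOTYPES:
        return "non-native"  # a haplotype absent from the survey proves introduction
    if neighbouring_haplotype is None:
        return "native, locality undetermined"  # no area of overall dominance nearby
    if tree_haplotype == neighbouring_haplotype:
        return "native and local"
    return "native, possibly non-local"

print(classify("A", "A"))   # native and local
print(classify("B", "A"))   # native, possibly non-local
print(classify("X", "A"))   # non-native
```

Note the asymmetry the abstract emphasises: a match can never *prove* local origin, it only fails to prove introduction.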
Abstract:
Study Design. Quiet stance on supporting bases of different lengths and with different visual inputs was tested in 24 study participants with chronic low back pain (LBP) and 24 matched control subjects. Objectives. To evaluate postural adjustment strategies and visual dependence associated with LBP. Summary of Background Data. Various studies have identified balance impairments in patients with chronic LBP, with many possible causes suggested. Recent evidence indicates that study participants with LBP have impaired trunk muscle control, which may compromise the control of trunk and hip movement during postural adjustments (e.g., the hip strategy). Because balance on a short base emphasizes the use of the hip strategy for balance control, we hypothesized that patients with LBP might have difficulty standing on short bases. Methods. Subjects stood on either a flat surface or a short base with different visual inputs. A task was counted as successful if balance was maintained for 70 seconds during bilateral stance and 30 seconds during unilateral stance. The number of successful tasks, horizontal shear force, and center-of-pressure motion were evaluated. Results. The hip strategy was reduced and visual dependence increased in study participants with LBP. Their failure rate was more than 4 times that of the controls in the bilateral standing task on a short base with eyes closed. Analysis of center-of-pressure motion also showed an inability to initiate and control a hip strategy. Conclusions. The inability to control a hip strategy indicates a deficit of postural control and is hypothesized to result from altered muscle control and proprioceptive impairment.
Abstract:
In vitro evolution imitates the natural evolution of genes and has been applied very successfully to the modification of coding sequences, but it has not yet been applied to promoter sequences. We propose an alternative method for functional promoter analysis: an in vitro evolution scheme consisting of rounds of error-prone PCR, followed by DNA shuffling and selection of mutant promoter activities. We modified the activity in embryogenic sugarcane cells of the promoter region of the Goldfinger isolate of banana streak virus and obtained mutant promoter sequences that showed an average mutation rate of 2.5% after one round of error-prone PCR and DNA shuffling. Selection and sequencing of promoter sequences with decreased or unaltered activity allowed us to rapidly map the position of one cis-acting element that influenced promoter activity in embryogenic sugarcane cells and to discover neutral mutations that did not affect promoter function. Applying this selective-shotgun promoter analysis method immediately after the promoter boundaries have been defined by 5' deletion analysis dramatically reduces the labor associated with traditional linker-scanning deletion analysis for revealing the position of functional promoter domains. Furthermore, this method allows the entire promoter to be investigated at once, rather than selected domains or nucleotides, increasing the prospect of identifying interacting promoter regions.
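The evolution scheme (error-prone copying, shuffling, selection) can be illustrated with a simulated, in-software analogue. The sequences, pool sizes and activity function below are hypothetical placeholders, not the paper's wet-lab protocol:

```python
# Toy simulation of the directed-evolution loop: error-prone copying at a
# ~2.5% per-base rate, crude one-point "DNA shuffling", then selection of the
# most active variants. The activity function is an arbitrary stand-in.
import random

BASES = "ACGT"

def error_prone_copy(seq, rate=0.025):
    """Copy seq, substituting each base with probability `rate`."""
    return "".join(random.choice([b for b in BASES if b != c])
                   if random.random() < rate else c
                   for c in seq)

def shuffle(parent_a, parent_b):
    """Crude shuffling: recombine two parents at a single random crossover."""
    cut = random.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def evolve(promoter, activity, rounds=1, pool_size=20, keep=5):
    pool = [promoter]
    for _ in range(rounds):
        mutants = [error_prone_copy(random.choice(pool)) for _ in range(pool_size)]
        mutants += [shuffle(random.choice(mutants), random.choice(mutants))
                    for _ in range(pool_size)]
        pool = sorted(mutants, key=activity, reverse=True)[:keep]  # selection step
    return pool

random.seed(0)
wild_type = "ATGCATGCATGCATGCATGC"
# Placeholder activity: similarity to an arbitrary reference sequence.
best = evolve(wild_type, lambda s: sum(a == b for a, b in zip(s, wild_type)))
```

In the actual method, "activity" is measured experimentally in embryogenic sugarcane cells; here it is just a scoring function to make the selection step concrete.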
Abstract:
The advent of molecular markers as a tool to aid selection has provided plant breeders with the opportunity to rapidly deliver superior genetic solutions to problems in agricultural production systems. However, a major constraint to the implementation of marker-assisted selection (MAS) in pragmatic breeding programs has been the perceived high relative cost of MAS compared to conventional phenotypic selection. In this paper, computer simulation was used to design a genetically effective and economically efficient marker-assisted breeding strategy aimed at a specific outcome. The strategy under investigation involved the integration of both restricted backcrossing and doubled haploid (DH) technology. The point at which molecular markers are applied in a selection strategy can be critical to the effectiveness and cost efficiency of that strategy. The application of molecular markers was considered at three phases of the strategy: allele enrichment in the BC1F1 population, gene selection at the haploid stage, and selection for recurrent parent background of DHs prior to field testing. Overall, incorporating MAS at all three stages was the most effective in delivering a high frequency of desired outcomes and in combining the selected favourable rust resistance, end-use quality and grain yield alleles. However, when costs were included in the model, the combination of MAS at the BC1F1 and haploid stages was identified as the optimal strategy. A detailed economic analysis showed that incorporating marker selection at these two stages not only increased genetic gain over the phenotypic alternative but actually reduced the overall cost by 40%.
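The cost/effectiveness trade-off described above can be sketched as follows. The stage costs, genotype frequencies and base cost are invented numbers chosen only to mirror the qualitative finding, not the paper's simulation results:

```python
# Sketch: compare MAS strategies by cost per desired genotype delivered.
# A strategy is the set of stages at which markers are applied.

def cost_per_outcome(strategy, base_cost, cost_per_stage, success_freq):
    """Total programme cost divided by the frequency of the desired genotype."""
    marker_cost = sum(cost_per_stage[s] for s in strategy)
    return (base_cost + marker_cost) / success_freq[frozenset(strategy)]

cost_per_stage = {"BC1F1": 100, "haploid": 80, "DH_background": 150}
success_freq = {  # hypothetical simulated frequencies of the target genotype
    frozenset(): 0.01,
    frozenset({"BC1F1"}): 0.05,
    frozenset({"BC1F1", "haploid"}): 0.20,
    frozenset({"BC1F1", "haploid", "DH_background"}): 0.22,
}
base_cost = 500  # hypothetical fixed cost of crossing and field testing

best = min(success_freq,
           key=lambda s: cost_per_outcome(s, base_cost, cost_per_stage, success_freq))
# With these invented numbers, the two-stage strategy (BC1F1 + haploid) wins:
# the third marker stage adds more cost than the small gain in frequency is worth.
```

This captures the paper's point that effectiveness and cost efficiency can favour different strategies: adding markers at every stage maximises the success frequency, but not the return per unit cost.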
Abstract:
Background: The aim of this study was to determine the effects of carvedilol on the costs related to the treatment of severe chronic heart failure (CHF). Methods: Costs for the treatment of heart failure within the National Health Service (NHS) in the United Kingdom (UK) were applied to resource utilisation data prospectively collected for all patients randomized into the Carvedilol Prospective Randomized Cumulative Survival (COPERNICUS) Study. Unit-specific, per diem (hospital bed day) costs were used to calculate expenditures due to hospitalizations. We also included the costs of carvedilol treatment, general practitioner surgery/office visits, hospital out-patient clinic visits and nursing home care, based on estimates derived from validated patterns of clinical practice in the UK. Results: The estimated cost of carvedilol therapy and related ambulatory care for the 1156 patients assigned to active treatment was £530,771 (£44.89 per patient/month of follow-up). However, patients assigned to carvedilol were hospitalised less often and accumulated fewer and less expensive days of admission. Consequently, the total estimated cost of hospital care was £3.49 million in the carvedilol group compared with £4.24 million for the 1133 patients in the placebo arm. The cost of post-discharge care was also lower in the carvedilol group than in the placebo group (£479,200 vs. £548,300). Overall, the cost per patient treated in the carvedilol group was £3948 compared to £4279 in the placebo group. This equated to a cost of £385.98 vs. £434.18, respectively, per patient/month of follow-up: an 11.1% reduction in health care costs in favour of carvedilol. Conclusions: These findings suggest that carvedilol treatment can not only increase survival and reduce hospital admissions in patients with severe CHF but also cut costs in the process.
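The headline figures quoted above can be checked arithmetically:

```python
# Verify the reported 11.1% reduction in cost per patient/month of follow-up.
carvedilol_pm, placebo_pm = 385.98, 434.18  # £ per patient/month
reduction = (placebo_pm - carvedilol_pm) / placebo_pm
print(f"{reduction:.1%}")  # 11.1%

# The ambulatory-care figures are also internally consistent: £530,771 spread
# over 1156 patients at £44.89 per patient/month implies roughly 10.2 months
# of average follow-up per patient.
mean_followup_months = 530_771 / (1156 * 44.89)
```

Note that the reduction is computed on the per-patient/month figures; the raw per-patient totals (£3948 vs. £4279) give a smaller percentage because follow-up durations differed between the arms.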
Abstract:
Objective To assess whether trends in mortality from heart failure (HF) in Australia are due to a change in awareness of the condition or to real changes in its epidemiology. Methods We carried out a retrospective analysis of official national mortality data between 1997 and 2003. A death was attributed to HF if the death certificate mentioned HF either as the underlying cause of death (UCD) or among the contributory factors. Findings From a total of 907 242 deaths, heart failure was coded as the UCD for 29 341 (3.2%) and was mentioned anywhere on the death certificate in 135 268 (14.9%). Between 1997 and 2003, there were decreases in the absolute numbers of deaths and in the age-specific and age-standardized mortality rates for HF, either as UCD or mentioned anywhere, for both sexes. HF was mentioned in 24.6% and 17.8% of deaths attributed to ischaemic heart disease and circulatory disease, respectively, and these proportions remained unchanged over the period of study. In addition, HF as UCD accounted for 8.3% of deaths attributed to circulatory disease, and this did not change materially from 1997 to 2003. Conclusion The decline in mortality from HF, measured as either number of deaths or rate, probably reflects a real change in the epidemiology of HF. Population-based studies are required to determine accurately the contributions of changes in incidence, survival and demographic factors to the evolving epidemiology of HF.
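The quoted proportions can be verified directly from the reported counts:

```python
# Check the headline percentages against the raw death counts.
total_deaths = 907_242
ucd_hf = 29_341          # HF coded as underlying cause of death
any_mention_hf = 135_268  # HF mentioned anywhere on the certificate

print(f"{ucd_hf / total_deaths:.1%}")          # 3.2%
print(f"{any_mention_hf / total_deaths:.1%}")  # 14.9%
```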
Abstract:
Recent terrorist events in the UK, such as the security alerts at British airports in August 2006 and the London bombings of July 2005, attracted extensive media and academic analysis. This study contends, however, that much of the commentary revealed a widespread failure among government agencies, academic and analytic experts and the wider media to grasp the nature of the threat, and that it continues to distort comprehension of the extant danger. The principal failure, this argument maintains, was, and continues to be, an asymmetry of comprehension that mistakes the still relatively limited means of violent jihadist radicals for limited political ends. The misapprehension often stems from the language that surrounds the idea of 'terrorism', which increasingly restricts debate to an intellectually redundant search for 'root causes' and gives rise to a politics of complacency. In recent times this outlook has consistently underestimated the level of the threat to the security of the UK. This article argues that a more realistic appreciation of the current security condition requires abandoning the prevailing view that the domestic threat is best prosecuted as a criminal conspiracy. It demands instead a total strategy to deal with a totalizing threat. The empirical evidence demonstrates the existence of a physical threat, not merely the political fear of threat. The implementation of a coherent set of social policies for confronting the threat at home recognizes that securing state borders and maintaining internal stability are the first tasks of government. Fundamentally, this requires a return to an understanding of the Hobbesian conditions for sovereignty, which, despite the delusions of post-Cold War cosmopolitan multiculturalism, never went away.
Abstract:
Formal methods have significant benefits for developing safety-critical systems: they allow for correctness proofs, model checking of safety and liveness properties, deadlock checking, and so on. However, formal methods do not scale well to real-world systems and demand specialist skills. For these reasons, development and analysis of large-scale safety-critical systems will require effective integration of formal and informal methods. In this paper, we use such an integrative approach to automate Failure Modes and Effects Analysis (FMEA), a widely used system safety analysis technique, using a high-level graphical modelling notation (Behavior Trees) and model checking. We inject component failure modes into the Behavior Trees and translate the resulting Behavior Trees to SAL code. This enables us to model check whether the system, in the presence of these faults, satisfies its safety properties, specified as temporal logic formulas. The benefit of this process is tool support that automates the tedious and error-prone aspects of FMEA.
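The fault-injection idea can be illustrated with a minimal sketch in ordinary code rather than the paper's Behavior Tree/SAL toolchain; the component model, failure modes and safety property below are all invented:

```python
# Sketch of automated FMEA by fault injection: run a toy system model once per
# component failure mode and check a safety property on the resulting state.

def check_safety(state):
    """Hypothetical safety property: the alarm must be raised whenever
    pressure is high."""
    return not (state["pressure"] == "high" and not state["alarm"])

def run_system(failure_mode=None):
    """Toy system: a sensor reads pressure and a controller raises the alarm.
    An injected failure mode overrides one component's behaviour."""
    pressure = "high"
    sensed = "low" if failure_mode == "sensor_stuck_low" else pressure
    alarm = False if failure_mode == "alarm_failed" else (sensed == "high")
    return {"pressure": pressure, "alarm": alarm}

failure_modes = [None, "sensor_stuck_low", "alarm_failed"]
results = {fm: check_safety(run_system(fm)) for fm in failure_modes}
# `results` records which injected failure modes violate the safety property,
# automating the "effects" step of FMEA for this toy system.
```

In the paper's approach the system model is a Behavior Tree translated to SAL and the property is a temporal logic formula checked over all behaviours, not a single concrete run; the sketch only shows the inject-then-check loop.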