17 results for FAILURE ANALYSIS
in Aston University Research Archive
Abstract:
The purlin-sheeting system has been the subject of numerous theoretical and experimental investigations over the past 30 years, but the complexity of the problem has led to great difficulty in developing a sound and general model. The primary aim of the thesis is to investigate the failure behaviour of cold-formed zed and channel sections used in purlin-sheeting systems. Both the energy method and the finite strip method are used to develop an approach for analysing cold-formed zed and channel section beams with partial lateral restraint from the metal sheeting when subjected to a uniformly distributed transverse load. The stress analysis of such beams is first carried out using an analytical model based on the energy method, in which the restraint actions of the sheeting are represented by two springs modelling the translational and rotational restraints. The numerical results show that the two springs have significantly different influences on the stresses in the beams, and that their influence also depends on the anti-sag bar and the position of the loading line. A novel method is then presented for analysing the elastic local buckling behaviour of these beams, in which the cross-sectional stress distribution containing the largest compressive stress is input into the finite strip analysis. Using this method, the individual influences of warping stress, partial lateral restraint from the sheeting, the dimensions of the cross section and the position of the loading line on the buckling behaviour are investigated.
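As a rough illustration of the kind of energy formulation described above (the symbols and the exact functional are assumptions for illustration, not the thesis's own expressions), the total potential energy of a partially restrained beam under a uniformly distributed load $q$ might be written as

$$
\Pi = \frac{1}{2}\int_0^L \left( EI_z\,v''^2 + EI_y\,u''^2 + EI_w\,\phi''^2 + GJ\,\phi'^2 \right)\mathrm{d}z
      + \frac{1}{2}\int_0^L \left( k_u\,u_s^2 + k_\phi\,\phi^2 \right)\mathrm{d}z
      - \int_0^L q\,v_q\,\mathrm{d}z ,
$$

where $u$, $v$ and $\phi$ are the lateral and vertical displacements and the twist, $k_u$ and $k_\phi$ are the translational and rotational spring stiffnesses representing the sheeting restraint, $u_s$ is the lateral displacement at the spring attachment line, and $v_q$ is the deflection at the line of loading. Minimising $\Pi$ with respect to assumed displacement functions yields the pre-buckling stresses that can then be fed into a finite strip analysis.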
Abstract:
Abstract (provisional): Background Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, so helping individuals overcome problems and respond appropriately is important. Little is understood about what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). Methods The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case by case, allowing the researcher to make sense of the participant's subjective world. The analysis allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. Results The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student faced multi-dimensional issues, each with their own combination of problems, but experienced remediation as a one-dimensional intervention focused only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. Conclusions These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are part of the picture.
Abstract:
An inherent weakness in the management of large-scale projects is the failure to achieve the scheduled completion date. When projects are planned with on-time completion as the objective, the initial planning plays a vital role in meeting project deadlines. Cost and quality are additional priorities when such projects are being executed. This article proposes a methodology for achieving the planned duration of a project through risk analysis, using a Monte Carlo simulation technique. The methodology is demonstrated using a case application of a cross-country petroleum pipeline construction project.
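As an illustrative sketch of the Monte Carlo approach described above (the activity names, duration ranges and the purely serial schedule are hypothetical, not the paper's pipeline case data):

```python
import random

# Hypothetical activities with (optimistic, most likely, pessimistic) durations in days.
activities = {
    "trenching":            (10, 14, 22),
    "pipe_stringing":       (8, 10, 15),
    "welding":              (20, 25, 40),
    "coating_and_lowering": (12, 15, 24),
    "hydrotesting":         (5, 7, 12),
}

def simulate_duration():
    """One Monte Carlo trial: sample each activity from a triangular
    distribution and sum durations along a purely serial schedule."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities.values())

def completion_probability(deadline_days, trials=10_000):
    """Estimate the probability of finishing within the deadline."""
    hits = sum(simulate_duration() <= deadline_days for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    print(f"P(finish within 80 days) = {completion_probability(80):.2f}")
```

In practice the simulated completion-time distribution would be compared against the contractual deadline to quantify schedule risk.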
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge play in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional as well as national level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the position of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews the literature in economics and economic geography on economic growth (Chapter 2). In the growth model literature, human capital has gained increased recognition as a key production factor alongside physical capital and labour. Although they leave technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. One issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in the economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and the urban environments that facilitate the exchange of such knowledge. Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including provision of ICT services and manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational upper secondary education) appears to have a significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas no such association is found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported by the urban infrastructure and public science base, which facilitate the exchange of tacit knowledge, and such regions also enjoy a low unemployment rate. However, the existing stock of human and physical capital in regions with a high level of urban infrastructure does not lead to a high rate of economic growth.
Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by the level of their existing stocks. We found no significant scale effects that would favour regions with a larger stock of human capital. The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study), as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
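A standard textbook form of the human-capital-augmented Solow-Swan model referred to in this abstract (the study's own specification may differ) is

$$
Y_t = K_t^{\alpha} H_t^{\beta} (A_t L_t)^{1-\alpha-\beta}, \qquad \alpha + \beta < 1,
$$

with accumulation equations $\dot{K} = s_K Y - \delta K$ and $\dot{H} = s_H Y - \delta H$. In this framework the steady-state level of output per effective worker depends on the stocks of physical and human capital, while growth along the transition path is driven by their accumulation, which corresponds to the stock-versus-accumulation distinction tested in the empirical analysis.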
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required; WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required. The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations, but cannot capture exceptions correctly, and neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern the association and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. The rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Where multiple rules apply to an input suffix, their precedence must be established. The resistance of prefixations to segmentation has been addressed by identifying linking vowel exceptions and irregular prefixes. The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with the morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model, because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon. The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
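A toy sketch of the rule mechanism described above, in which suffix rules are expressed as character substitutions and checked against a lexicon rather than applied by naive segmentation (the rules, the mini-lexicon and the function name are invented for illustration and are not the thesis's actual resources):

```python
from typing import Optional

# Invented mini-lexicon standing in for the enriched lexical database.
LEXICON = {"happy", "happiness", "decide", "decision", "run", "runner"}

# Each rule maps a derived-form suffix to a base-form suffix (a character
# substitution), avoiding segmentation alone.
SUFFIX_RULES = [
    ("iness", "y"),    # happiness -> happy
    ("ision", "ide"),  # decision  -> decide
    ("ner",   ""),     # runner    -> run (toy handling of consonant doubling)
]

def derive_base(word: str) -> Optional[str]:
    """Return a lexically valid base form linked to `word`, if any rule applies."""
    for derived_suffix, base_suffix in SUFFIX_RULES:
        if word.endswith(derived_suffix):
            candidate = word[: -len(derived_suffix)] + base_suffix
            if candidate in LEXICON:          # lexical validity requirement
                return candidate
    return None

for w in ("happiness", "decision", "runner"):
    print(w, "->", derive_base(w))
```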
Abstract:
This research examines a behavioural-based safety (BBS) intervention within a paper mill in the South East of England. In addition to this intervention, two other mills are examined for purposes of comparison: one with an established BBS programme, and the other improving its safety management system through management ownership. BBS programmes have become popular within the UK, but most of the research on their efficacy is carried out by the BBS providers themselves. This thesis aims to evaluate a BBS intervention from a standpoint that is not commercially biased in favour of BBS schemes. The aim of a BBS scheme is to change personnel behaviours or attitudes, which in turn should positively affect the organisation's safety culture. The research framework involved a qualitative methodology to examine the effects of the intervention on the paper mill's safety culture. The techniques used were questionnaires and semi-structured interviews, in addition to observation and discussions made possible by the author's position as participant observer. The results demonstrated a failure to improve any aspect of the mill's safety culture, which worsened following the BBS intervention. Issues such as trust, morale, communication and support of management showed significant signs of negative workforce response. The paper mill where the safety management system approach was utilised demonstrated a significantly improved safety culture and achieved site ownership from middle managers and supervisors. Research has demonstrated that a solid foundation is required prior to successfully implementing a BBS programme. For a programme to work there must be middle management support in addition to senior management commitment. If a trade union actively distances itself from BBS, the programme is also unlikely to be effective. This thesis proposes that BBS observation programmes are not suitable for the papermaking industry, particularly when staffing levels are low due to challenging economic conditions. Observers are not available during high-hazard situations, which suggests that BBS implementation is not the correct intervention for the paper industry.
Abstract:
In this thesis, details of a proposed method for the elastic-plastic failure load analysis of complete building structures are given. In order to handle the problem, a computer programme in Atlas Autocode is produced. The structures consist of a number of parallel shear walls and intermediate frames connected by floor slabs. The results of an experimental investigation are given to verify the theoretical results and to demonstrate various factors that may influence the behaviour of these structures. Large full-scale practical structures are also analysed by the proposed method, and suggestions are made for achieving design economy as well as for extending research in various aspects of this field. The existing programme for elastic-plastic analysis of large frames is modified to allow for the effect of composite action of structural members, i.e. reinforced concrete floor slabs and the supporting steel beams. This modified programme is used to analyse some framed-type structures with composite action as well as those which incorporate plates and shear walls. The results obtained are studied to ascertain the influence of composite action and other factors on the load-carrying capacity of both bare frames and complete building structures. The theoretical failure load presented in this thesis does not predict the overall failure load of the structure, nor does it predict the partial failure load of the shear walls and slabs; it merely predicts the partial failure load of a single frame and assumes that the loss of stiffness of such a frame renders the overall structure unusable. For most structures the analysis proposed in this thesis is likely to break down prematurely due to the failure of the slab and shear wall system, and this factor must be taken into account in any future work on such structures. The experimental work reported in this thesis is acknowledged to be unsatisfactory as a verification of the limited theory proposed. In particular, perspex was not found to be a suitable material for testing at high loads; micro-concrete may be more suitable.
Abstract:
Due to the failure of PRARE, the orbital accuracy of ERS-1 is typically 10-15 cm radially, compared to 3-4 cm for TOPEX/Poseidon. To gain the most from these simultaneous datasets it is necessary to improve the orbital accuracy of ERS-1 so that it is commensurate with that of TOPEX/Poseidon. For the integration of the two datasets it is also necessary to determine the altimeter and sea state biases for each of the satellites. Several models for the sea state bias of ERS-1 are considered by analysis of the ERS-1 single-satellite crossovers. The model adopted expresses the sea state bias as a percentage of the significant wave height, namely 5.95%. The removal of ERS-1 orbit error and the recovery of a relative bias between ERS-1 and TOPEX/Poseidon are both achieved by analysis of dual-satellite crossover residuals. The gravitational-field-based radial orbit error is modelled by a finite Fourier expansion series, with the dominant frequencies determined by analysis of the JGM-2 covariance matrix. Periodic and secular terms modelling the errors due to atmospheric density, solar radiation pressure and initial state vector mis-modelling are also solved for. Validation of the dataset unification consists of comparing the mean sea surface topographies and annual variabilities derived from both the corrected and uncorrected ERS-1 orbits with those derived from TOPEX/Poseidon. The global and regional geographically fixed/variable orbit errors are also analysed pre- and post-correction, and a significant reduction is noted. Finally, the use of dual/single-satellite crossovers and repeat pass data for the calibration of ERS-2 with respect to ERS-1 and TOPEX/Poseidon is demonstrated by calculating the ERS-1/2 sea state and relative biases.
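The gravitationally induced radial orbit error mentioned above is commonly modelled (the exact parameterisation used in this study may differ) as a truncated Fourier series plus secular terms for the non-gravitational mis-modelling:

$$
\Delta r(t) \approx a_0 + \dot{a}\,t + \sum_{k=1}^{K}\bigl(a_k \cos\omega_k t + b_k \sin\omega_k t\bigr),
$$

where the dominant frequencies $\omega_k$ are selected from the JGM-2 covariance analysis and the coefficients are estimated by least squares from the dual-satellite crossover residuals.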
Abstract:
Hydrocarbons are the most common form of energy used to date. Activities involving the exploration and exploitation of large oil and gas fields are constantly in operation and have extended to such hostile environments as the North Sea. This places much greater demands on the materials used, and the need to enhance the endurance of existing materials must continue in parallel with exploration. Because of their ease of fabrication, relatively high mechanical properties and low cost, steels are the most widely favoured material for the construction of offshore platforms. The parts of an offshore structure most prone to failure are the welded nodal joints, particularly those in the vicinity of the splash zone. This is an area of complex, high stress concentrations and of varying mechanical and metallurgical properties, in addition to severe North Sea environmental conditions. The main part of this work has been concerned with durability studies of this type of steel, based on the concept of worst-case analysis, consisting of combinations of welds of varying quality, various degrees of stress concentration, and the environmental conditions of stress corrosion and hydrogen embrittlement. The experiments were designed to reveal the significance of defects as sites of crack initiation in the welded steels and the extent to which stress corrosion and hydrogen embrittlement limit their durability. This has been done for various heat treatments, and in some experiments deformation was forced through the welded zone of the specimens to reveal the mechanical properties of the welds themselves and to provide data for finite element simulations. The results of these simulations have been compared with the actual deformation and fracture behaviour to reveal the extent to which both mechanical and metallurgical factors control the behaviour of the steels in the hostile environment of high stress, corrosion and hydrogen embrittlement at their surface.
Abstract:
Ignorance of user factors can be seen as one of the nontechnical issues contributing to expert system failure. An expert advisory system is built for nonexpert users; the users' acceptance is a very important factor for its successful implementation. If an expert advisory system satisfactorily represents the expertise in the domain, there still remains the question: "Will the end-users use the system?" This paper aims to address users' issues by analysing their reactions towards an expert advisory system called ADGAME, developed to help its users make better decisions in playing a competitive business game. Two experiments with ADGAME have been carried out. The research results show that, when the use of the expert advisory system is optional, there is considerable reluctance to use it, particularly amongst the "worst" potential users. Users also doubt the potential benefits in terms of improved learning and confidence in decisions made. Strangely, the one positive expectation that users had, that the system would save them time, proved not to be the case in practice; ADGAME appears to improve the users' effectiveness rather than their efficiency. © 1995.
Abstract:
bCHP (biomass combined heat and power) systems are highly efficient at smaller scales when a significant proportion of the heat produced can be effectively utilised for hot water, space heating or industrial heating purposes. However, there are many barriers to project development, and this has greatly inhibited deployment in the UK. Project viability is highly sensitive to changes in policy, regulation, the finance market and the low-cost fossil fuel incumbent. The paper reviews the barriers to small-scale bCHP project development in the UK, along with a case study of a failed 1.5 MWel bCHP scheme. The paper offers possible explanations for the project's failure and suggests adaptations to improve project resilience. Analysis of the project's capital structuring, contract length and bankability; feedstock type and price uncertainty; and plant oversizing highlights the negative impact of the existing barriers on project development. The paper concludes with a discussion of the effects of these barriers on the case study project and on the industry more generally. A greater understanding of the techno-economic effects of some barriers for small-scale bCHP schemes is demonstrated, along with some methods for improving the attractiveness and resilience of projects of this kind. © 2014 Elsevier Ltd.
Abstract:
We report an empirical analysis of long-range dependence in the returns of eight stock market indices, using Rescaled Range Analysis (RRA) to estimate the Hurst exponent. Monte Carlo and bootstrap simulations are used to construct critical values for the null hypothesis of no long-range dependence. The issue of disentangling short-range and long-range dependence is examined. Pre-filtering by fitting a (short-range) autoregressive model eliminates part of the long-range dependence when the latter is present, while failure to pre-filter leaves open the possibility of conflating short-range and long-range dependence. There is strong evidence of long-range dependence for the small central European Czech stock market index PX-glob, and weaker evidence for two smaller western European stock market indices, MSE (Spain) and SWX (Switzerland). There is little or no evidence of long-range dependence for the other five indices, including those with the largest capitalizations among those considered, DJIA (US) and FTSE350 (UK). These results are generally consistent with prior expectations concerning the relative efficiency of the stock markets examined. © 2011 Elsevier Inc.
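A minimal sketch of the classical rescaled-range estimator of the Hurst exponent used in this kind of analysis (illustrative only; the paper's implementation, block sizes and pre-filtering choices may differ):

```python
import numpy as np

def rs_hurst(returns: np.ndarray, min_block: int = 8) -> float:
    """Estimate the Hurst exponent by classical rescaled-range (R/S) analysis:
    for several block sizes n, average R/S over non-overlapping blocks and fit
    log(R/S) against log(n)."""
    n_obs = len(returns)
    sizes, rs_values = [], []
    n = min_block
    while n <= n_obs // 2:
        rs_per_block = []
        for start in range(0, n_obs - n + 1, n):
            block = returns[start:start + n]
            dev = np.cumsum(block - block.mean())   # cumulative deviations from the mean
            r = dev.max() - dev.min()               # range of cumulative deviations
            s = block.std(ddof=1)                   # sample standard deviation
            if s > 0:
                rs_per_block.append(r / s)
        if rs_per_block:
            sizes.append(n)
            rs_values.append(np.mean(rs_per_block))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope  # H near 0.5 indicates no long-range dependence

rng = np.random.default_rng(0)
print(f"H for white noise = {rs_hurst(rng.standard_normal(4096)):.2f}")
```

As the abstract notes, in practice the returns would first be pre-filtered with a short-order autoregressive model so that short-range dependence is not conflated with long-range dependence.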
Abstract:
Risk management in healthcare represents a group of various complex actions implemented to improve the quality of healthcare services and guarantee patient safety. Risks cannot be eliminated, but they can be controlled with risk assessment methods derived from industrial applications; among these, Failure Mode, Effects and Criticality Analysis (FMECA) is a widely used methodology. The main purpose of this work is the analysis of the failure modes of the Home Care (HC) service provided by the local healthcare unit of Naples (ASL NA1), focusing attention on human and non-human factors according to the organizational framework selected by the WHO. © Springer International Publishing Switzerland 2014.
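A minimal sketch of the FMECA-style scoring step, in which severity, occurrence and detectability are combined into a Risk Priority Number and failure modes are ranked (the failure modes and scores below are invented examples, not the ASL NA1 data):

```python
# Each entry: (description, severity, occurrence, detectability) on 1-10 scales.
failure_modes = [
    ("Missed scheduled home visit",        7, 5, 4),
    ("Wrong drug dosage administered",     9, 2, 6),
    ("Incomplete patient record handover", 6, 6, 5),
]

# Risk Priority Number = severity * occurrence * detectability; rank descending.
ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"RPN {rpn:4d}  {desc}")
```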
Abstract:
Decision-making on product quality is an indispensable stage in product development, undertaken in order to reduce product development risk. Based on the identification of the deficiencies of quality function deployment (QFD) and failure modes and effects analysis (FMEA), a novel decision-making method is presented that draws upon a knowledge network of failure scenarios. An ontological expression of failure scenarios is presented together with a framework of the failure knowledge network (FKN). According to the roles of quality characteristics (QCs) in failure processing, QCs are set into three categories, namely perceptible QCs, restrictive QCs and controllable QCs, which represent the monitoring targets, control targets and improvement targets, respectively, for quality management. A mathematical model and algorithms based on the analytic network process (ANP) are introduced for calculating the priority of QCs with respect to different development scenarios. A case study is provided according to the proposed decision-making procedure based on the FKN, and the methodology is applied in the propeller design process to solve the problem of prioritising QCs. This paper provides a practical approach for decision-making in product quality. Copyright © 2011 Inderscience Enterprises Ltd.
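A minimal sketch of the eigenvector calculation that underlies ANP/AHP priority derivation for the three QC categories (the pairwise-comparison values are invented for illustration, not taken from the propeller case study):

```python
import numpy as np

QCS = ["perceptible QC", "restrictive QC", "controllable QC"]

# Reciprocal pairwise-comparison matrix: A[i, j] expresses the relative
# importance of QC i over QC j (invented values).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)           # index of the largest eigenvalue
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()                      # normalise to priority weights

for name, w in zip(QCS, weights):
    print(f"{name}: {w:.3f}")
```

In a full ANP calculation such local priority vectors would be assembled into a supermatrix that also captures dependence and feedback between the QCs and the development scenarios.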