11 results for Univariate Analysis box-jenkins methodology
in Aston University Research Archive
Abstract:
Abstract (provisional): Background Failing a high-stakes assessment at medical school is a major event for those who go through the experience. Students who fail at medical school may be more likely to struggle in professional practice, so helping individuals overcome problems and respond appropriately is important. There is little understanding of what factors influence how individuals experience failure or make sense of the failing experience in remediation. The aim of this study was to investigate the complexity surrounding the failure experience from the student's perspective using interpretative phenomenological analysis (IPA). Methods The accounts of three medical students who had failed final re-sit exams were subjected to in-depth analysis using IPA methodology. IPA was used to analyse each transcript case by case, allowing the researcher to make sense of the participant's subjective world. The analysis process allowed the complexity surrounding the failure to be highlighted, alongside a narrative describing how students made sense of the experience. Results The circumstances surrounding students as they approached assessment and experienced failure at finals were a complex interaction between academic problems, personal problems (specifically finance and relationships), strained relationships with friends, family or faculty, and various mental health problems. Each student experienced multi-dimensional issues, each with their own individual combination of problems, but experienced remediation as a one-dimensional intervention focused only on improving performance in written exams. What these students needed included help with clinical skills, plus social and emotional support. Fear of termination of their course was a barrier to open communication with staff. Conclusions These students' experience of failure was complex. The experience of remediation is influenced by the way in which students make sense of failing. Generic remediation programmes may fail to meet the needs of students for whom personal, social and mental health issues are part of the picture.
Abstract:
This study investigates plagiarism detection, with an application in forensic contexts. Two types of data were collected for the purposes of this study. Data in the form of written texts were obtained from two Portuguese universities and from a Portuguese newspaper. These data are analysed linguistically to identify instances of verbatim, morpho-syntactical, lexical and discursive overlap. Data in the form of surveys were obtained from two higher education institutions in Portugal, and another two in the United Kingdom. These data are analysed using a 2 by 2 between-groups Univariate Analysis of Variance (ANOVA) to reveal cross-cultural divergences in the perceptions of plagiarism. The study discusses the legal and social circumstances that may contribute to adopting a punitive approach to plagiarism or, conversely, to rejecting punishment. The research adopts a critical approach to plagiarism detection. On the one hand, it describes the linguistic strategies adopted by plagiarists when borrowing from other sources; on the other hand, it discusses the relationship between these instances of plagiarism and the context in which they appear. A focus of this study is whether plagiarism involves an intention to deceive and, in that case, whether forensic linguistic evidence can provide clues to this intentionality. It also evaluates current computational approaches to plagiarism detection and identifies strategies that these systems fail to detect. Specifically, a method is proposed to detect translingual plagiarism. The findings indicate that, although cross-cultural aspects influence the different perceptions of plagiarism, a distinction needs to be made between intentional and unintentional plagiarism. The linguistic analysis demonstrates that linguistic elements can contribute to finding clues to the plagiarist's intentionality. Furthermore, the findings show that translingual plagiarism can be detected using the proposed method, and that plagiarism detection software can be improved using existing computer tools.
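A minimal sketch of the 2 by 2 between-groups univariate ANOVA design described above, written in Python with statsmodels; the column names (perception_score, country, role) and the scores themselves are hypothetical, since the original survey variables are not reported in the abstract.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Illustrative data only: perception-of-plagiarism scores crossed by
# country (Portugal vs. UK) and respondent role (student vs. staff).
df = pd.DataFrame({
    "perception_score": [3.1, 4.2, 2.8, 4.9, 3.5, 4.1, 2.9, 4.4],
    "country": ["PT", "PT", "PT", "PT", "UK", "UK", "UK", "UK"],
    "role": ["student", "staff"] * 4,
})

# Two-way between-groups ANOVA with interaction: main effects of each
# factor plus the country x role interaction term.
model = ols("perception_score ~ C(country) * C(role)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```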
Abstract:
This paper reports an investigation of local sustainable production in Sweden aimed at exploring the factors contributing to the survival and competitiveness of manufacturing. Eight companies were studied on two occasions 30 years apart, in 1980 and 2010. To provide a valid longitudinal perspective, a common format for data collection was used. The DRAMA methodology was employed as a framework for data collection and analysis (Bennett and Forrester, 1990). A number of results concerning the long-term competitiveness and sustainability of manufacturing companies are reported in detail.
Abstract:
Mainstream gentrification research predominantly examines the experiences and motivations of middle-class gentrifier groups, while overlooking the experiences of non-gentrifying groups and the impact of in situ local processes on gentrification itself. In this paper, I discuss gentrification, neighbourhood belonging and the spatial distribution of class in Istanbul by examining patterns of belonging among both gentrifiers and non-gentrifying groups in the historic neighbourhoods of the Golden Horn/Halic. I use multiple correspondence analysis (MCA), a methodology rarely used in gentrification research, to explore the social and symbolic borders between these two groups. I show how gentrification leads to spatial clustering by creating exclusionary practices and eroding social cohesion, and illuminate divisions that are inscribed into the physical space of the neighbourhood.
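A minimal numpy/pandas sketch of MCA as referenced above, computed via singular value decomposition of the indicator (disjunctive) matrix; the survey variables (tenure, education, group) are invented for illustration and are not those of the study.

```python
import numpy as np
import pandas as pd

# Hypothetical categorical responses from four respondents.
df = pd.DataFrame({
    "tenure": ["owner", "renter", "owner", "renter"],
    "education": ["degree", "none", "degree", "degree"],
    "group": ["gentrifier", "non-gentrifier", "gentrifier", "non-gentrifier"],
})

Z = pd.get_dummies(df).to_numpy(dtype=float)        # indicator matrix
P = Z / Z.sum()                                     # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                 # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardised residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

# Principal coordinates of respondents on the MCA dimensions; plotting
# the first two reveals the social/symbolic clustering of the groups.
row_coords = (U * s) / np.sqrt(r[:, None])
print(row_coords[:, :2])
```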
Abstract:
Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a distinctive feature of survival data analysis. This paper proposes an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensemble artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, such as the least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC). The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 prediction compared with the model built with a single network. Although the final reduced model using the neural networks and their ensembles is larger than those of the other methods, it outperformed the others in both concordance index and sensitivity for center 2 prediction, indicating that the reduced model is more powerful for cross-center prediction.
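A hedged sketch of the univariate Cox screening step in the hybrid selection process described above, using the Python lifelines library; the dataset, column names and significance threshold are placeholders, since the EVAR data and the paper's exact screening rule are not given here.

```python
import pandas as pd
from lifelines import CoxPHFitter

def univariate_cox_screen(df, duration_col, event_col, p_threshold=0.05):
    """Fit a one-covariate Cox model per candidate variable and keep
    those whose coefficient is significant at p_threshold."""
    selected = []
    candidates = [c for c in df.columns if c not in (duration_col, event_col)]
    for var in candidates:
        cph = CoxPHFitter()
        # Censored observations enter through event_col (0 = censored).
        cph.fit(df[[var, duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        if cph.summary.loc[var, "p"] < p_threshold:
            selected.append(var)
    return selected

# Usage (hypothetical columns): survivors of the screen would then feed
# the downstream ensemble neural network classifiers.
# keep = univariate_cox_screen(evar_df, "survival_months", "event")
```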
Abstract:
Purpose: Considering the UK's limited capacity for waste disposal (particularly for hazardous/radiological waste), there is a growing focus on waste avoidance and minimisation to lower the volumes of waste being sent to disposal. The hazardous nature of some waste can complicate its management and reduction. To address this problem, a decision-making methodology was needed to support managers in the nuclear industry as they identify ways to reduce the production of avoidable hazardous waste. The methodology we developed is called Waste And Sourcematter Analysis (WASAN), a methodology that begins the thought process at the pre-waste-creation stage (i.e. avoid). Design/methodology/approach: The methodology analyses the source of waste, the production of waste inside the facility, the knock-on effects from up/downstream facilities on waste production, and the down-selection of waste minimisation actions/options. WASAN has been applied to case studies with licensees, and this paper reports on one such case study: the management of plastic bags in the Enriched Uranium Residues Recovery Plant (EURRP) at Springfields (UK), where it was used to analyse the generation of radioactive plastic bag waste. Findings: Plastic bags are used in EURRP as a strategy to contain hazard. Double bagging of materials led to the proliferation of these bags as a waste. The paper reports on the philosophy behind WASAN, the application of the methodology to this problem, the results, and views from managers in EURRP. Originality/value: This paper presents WASAN as a novel methodology for analysing the minimisation of avoidable hazardous waste. This addresses an issue that is important to many industries, e.g. where legislation enforces waste minimisation, where waste disposal costs encourage waste avoidance, or where plant design can reduce waste. The paper forms part of the HSE Nuclear Installations Inspectorate's desire to work towards greater openness and transparency in its work and the development in its thinking. © Crown Copyright 2011.
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised in its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA) in comparison to the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired methicillin-resistant S. aureus (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA), 50 hospital-acquired MRSA (HA-MRSA) isolates (from the University Hospital Birmingham, NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and PCA. The results indicated that PCA revealed relationships between MRSA strains, which were more strongly correlated with known epidemiology, most likely because, unlike cluster analysis, PCA does not have the constraint of generating a hierarchic classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to complement further the epidemiological study of infectious disease. © 2005 Elsevier B.V. All rights reserved.
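A minimal sketch of PCA applied to binary PFGE band-presence profiles, in the spirit of the method described above; the band matrix below is invented, not the study's SmaI digest data.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows = isolates, columns = presence/absence of each PFGE fragment band.
bands = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
])

pca = PCA(n_components=2)
scores = pca.fit_transform(bands)        # isolate coordinates on PC1/PC2
print(pca.explained_variance_ratio_)     # variance captured per component

# The loadings identify which polymorphic bands drive the separation,
# the follow-up analysis the abstract notes dendrograms cannot offer.
print(pca.components_)
```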
Abstract:
The rapid developments in computer technology have resulted in a widespread use of discrete event dynamic systems (DEDSs). This type of system is complex because it exhibits properties such as concurrency, conflict and non-determinism. It is therefore important to model and analyse such systems before implementation to ensure safe, deadlock-free and optimal operation. This thesis investigates current modelling techniques and describes Petri net theory in more detail. It reviews top-down, bottom-up and hybrid Petri net synthesis techniques that are used to model large systems, and introduces an object-oriented methodology to enable modelling of larger and more complex systems. Designs obtained by this methodology are modular, easy to understand and allow re-use of designs. Control is the next logical step in the design process. This thesis reviews recent developments in the control of DEDSs and investigates the use of Petri nets in the design of supervisory controllers. The scheduling of exclusive use of resources is investigated, an efficient Petri net based scheduling algorithm is designed, and a re-configurable controller is proposed. To enable the analysis and control of large and complex DEDSs, an object-oriented C++ software toolkit was developed and used to implement a Petri net analysis tool, and Petri net scheduling and control algorithms. Finally, the methodology was applied to two industrial DEDSs: a prototype can-sorting machine developed by Eurotherm Controls Ltd., and a semiconductor testing plant belonging to SGS Thomson Microelectronics Ltd.
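A minimal Python sketch of a place/transition Petri net with token firing, illustrating the kind of model the thesis builds on; this is not the thesis's own C++ toolkit, whose API is not given, and the example net is invented.

```python
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n       # consume input tokens
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n  # produce outputs

# Two machines competing for one shared resource: the conflict and
# mutual exclusion properties the thesis's schedulers must handle.
net = PetriNet({"idle1": 1, "idle2": 1, "resource": 1})
net.add_transition("start1", {"idle1": 1, "resource": 1}, {"busy1": 1})
net.add_transition("end1", {"busy1": 1}, {"idle1": 1, "resource": 1})
net.add_transition("start2", {"idle2": 1, "resource": 1}, {"busy2": 1})
net.fire("start1")
print(net.enabled("start2"))   # False: the shared resource is taken
```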
Abstract:
This research aims to examine the effectiveness of Soft Systems Methodology (SSM) in enabling systemic change within local government and local NHS environments, and to examine the role of the facilitator within this process. Checkland's Mode 2 variant of Soft Systems Methodology was applied on an experimental basis in two environments, Herefordshire Health Authority and Sandwell Health Authority. The Herefordshire application used SSM in the design of an Integrated Care Pathway for stroke patients. In Sandwell, SSM was deployed to assist in the design of an Information Management and Technology (IM&T) Strategy for the boundary-spanning Sandwell Partnership. Both of these environments were experiencing significant organisational change as the experiments unfolded. The explicit objectives of the research were: to examine the evolution and development of SSM and to contribute to its further development; to apply the Soft Systems Methodology to change processes within the NHS; to evaluate the potential role of SSM in this wider process of change; to assess the role of the researcher as a facilitator within this process; and to develop a critical framework through which the impact of SSM on change might be understood and assessed. In developing these objectives, it became apparent that there was a gap in knowledge relating to SSM, concerning the evaluation of the role of the approach in the change process. The case studies highlighted issues in stakeholder selection and management; the communicative assumptions in SSM; the ambiguous role of the facilitator; and the impact of highly politicised problem environments on the effectiveness of the methodology in the process of change. An augmented variant of SSM that integrates an appropriate (social constructivist) evaluation method is outlined, together with a series of hypotheses about the operationalisation of this proposed method.
Abstract:
Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10°-images of the ONH were obtained using the HRT. The mean topography image was determined, and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10°-sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) compared with the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 ± 3.4, respectively) compared with the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line manually applied to the normalised rim area-disc area and the rim area-disc area ratio data correctly classified 100% of normal subjects and glaucoma patients. In this study sample, the regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality. These reference standards will then need to be investigated in an independent sample to fully validate the technique. Conclusions: Using a curve-fitting approach to fit ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger-scale studies, may potentially be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
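A hedged Python sketch of the ranked-segment curve-fitting idea: sectoral HRT values are sorted in descending order and a least-squares line is fitted to the ranked curve. The sector values below are simulated for illustration; the study's actual data and classification cut-offs are not reproduced.

```python
import numpy as np

def ranked_segment_fit(sector_values):
    """Sort 10-degree sector values in descending order and fit a
    least-squares regression line; return (slope, intercept)."""
    ranked = np.sort(np.asarray(sector_values, dtype=float))[::-1]
    ranks = np.arange(1, ranked.size + 1)
    slope, intercept = np.polyfit(ranks, ranked, deg=1)
    return slope, intercept

# 36 sectors of a normalised rim-area/disc-area parameter (simulated).
rng = np.random.default_rng(0)
normal_eye = np.clip(rng.normal(0.8, 0.05, 36), 0, None)
slope, intercept = ranked_segment_fit(normal_eye)
print(f"slope={slope:.4f}, intercept={intercept:.3f}")
# Focal glaucomatous loss steepens the ranked curve's slope, which is
# the topographic information the fitted parameters retain.
```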
Abstract:
A simplified (without phase modulator) scheme of a black-box optical regenerator is proposed, in which an appropriate nonlinear propagation regime is used to enhance regeneration. Applying semi-theoretical models, the authors optimise the scheme and demonstrate the feasibility of error-free long-distance transmission at 40 Gbit/s.
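The abstract does not give the model, but nonlinear fibre propagation of this kind is commonly simulated with the scalar nonlinear Schrödinger equation. Below is a minimal split-step Fourier sketch in Python with illustrative fibre parameters; it is a generic propagation model, not the authors' regenerator design.

```python
import numpy as np

def ssfm(u0, dt, dist, dz, beta2=-20e-27, gamma=1.3e-3):
    """Propagate field u0 over `dist` metres in steps of `dz` metres,
    alternating dispersive (frequency-domain) and Kerr nonlinear steps.
    beta2 in s^2/m, gamma in 1/(W*m); values here are typical, not the
    paper's."""
    n = u0.size
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)        # angular frequency grid
    disp = np.exp(0.5j * beta2 * w**2 * dz)        # one-step dispersion operator
    u = u0.astype(complex)
    for _ in range(int(dist / dz)):
        u = np.fft.ifft(np.fft.fft(u) * disp)       # linear (dispersion) step
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz) # nonlinear (Kerr) step
    return u

# A 10 ps sech pulse on a 512-point, 100 ps time window, propagated 10 km.
t = np.linspace(-50e-12, 50e-12, 512)
u0 = np.sqrt(1e-3) / np.cosh(t / 10e-12)
u = ssfm(u0, dt=t[1] - t[0], dist=10e3, dz=100.0)
print(np.max(np.abs(u)**2))   # peak power after propagation
```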