16 results for 2-sigma error

in Deakin Research Online - Australia


Relevance:

80.00%

Publisher:

Abstract:

This paper focuses on the development of a hybrid phenomenological/inductive model to improve the current physical setup force model on a five stand industrial hot strip finishing mill. We approached the problem from two directions. In the first approach, the starting point was the output of the current setup force model. A feedforward multilayer perceptron (MLP) model was then used to estimate the true roll separating force using some other available variables as additional inputs to the model.

It was found that it is possible to significantly improve the estimation of the roll separating force, from 5.3% error on average with the current setup model to 2.5% error on average with the hybrid model. The corresponding improvement for the first coils is from 7.5% with the current model to 3.8% with the hybrid model. This was achieved by including, in addition to each stand's force from the current model, the setup forces from the other stands, as well as a limited set of additional variables: a) aim width; b) setup thickness; c) setup temperature; and d) measured force from the previous coil.
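The percentage errors quoted above are averages of per-coil force errors; a metric of this form can be sketched as follows (the force values below are invented for illustration, not mill data):

```python
def mean_abs_pct_error(measured, predicted):
    """Mean absolute percentage error between measured and model forces."""
    assert len(measured) == len(predicted) and measured
    return 100.0 * sum(
        abs(m - p) / abs(m) for m, p in zip(measured, predicted)
    ) / len(measured)

# Illustrative (made-up) roll separating forces for five coils
measured = [10.0, 12.0, 11.0, 9.5, 10.5]
setup    = [10.6, 11.3, 11.7, 9.0, 11.1]   # current setup model
hybrid   = [10.2, 11.9, 11.2, 9.4, 10.6]   # hybrid MLP-corrected model

print(mean_abs_pct_error(measured, setup))   # larger average error
print(mean_abs_pct_error(measured, hybrid))  # smaller average error
```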

In the second approach, we investigated the correlation between the large errors in the current model and the model's input parameters. The data set was split into two subsets, one representing the "normal" level of error between the current model and the measured force value, the other containing the coils with a "large" level of error. An additional data set, containing the changes in each coil's inputs relative to the previous coil, was created to investigate the dependency on the previous coil.

The data sets were then analyzed using a C4.5 decision tree. The main findings were, first, that the level of the speed vernier variable is highly correlated with the large errors in the current setup model; specifically, a high positive speed vernier value often corresponded to a large error. Second, large changes in the model flow stress values between coils were frequently correlated with larger errors in the current setup force model.
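C4.5 ranks candidate splits by information gain over attribute thresholds; a minimal sketch of scoring one hypothetical speed vernier threshold against "normal"/"large" error classes (all values invented for illustration):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def info_gain(values, labels, threshold):
    """Information gain of splitting a numeric attribute at `threshold`."""
    left  = [l for v, l in zip(values, labels) if v <= threshold]
    right = [l for v, l in zip(values, labels) if v > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# Hypothetical speed vernier values and error class per coil
vernier = [-0.2, -0.1, 0.0, 0.3, 0.4, 0.5]
error   = ["normal", "normal", "normal", "large", "large", "large"]
print(info_gain(vernier, error, 0.15))  # perfectly separating split
```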

Relevance:

80.00%

Publisher:

Abstract:

This study aimed to describe the radiotherapy (RT) management and subsequent outcome in a cohort of patients with newly diagnosed glioma. Treatment details were obtained via a questionnaire completed by neurosurgeons, radiation and medical oncologists who treated patients diagnosed with glioma in Victoria during 1998–2000. Patients were identified using the population-based Victorian Cancer Registry. Over the study period, data on 828 patients were obtained, of whom 612 (74%) were referred for consideration of RT. Radiotherapy was given to 496 patients as part of their initial treatment and to an additional 10 patients at the time of tumour recurrence or progression. The median age was 72 (range 16–85) years. Median overall survival (OS) was 9.2 (standard error (SE) 0.6) months for the entire group. Median OS was 29.1 (SE 8.0) and 7.4 (SE 0.4) months for patients with histologically confirmed World Health Organization Grade III (anaplastic astrocytoma) and Grade IV (glioblastoma multiforme) tumours, respectively. A total of 47 different RT dose fractionation schedules were identified. This is the largest survey detailing the management of glioma with RT published to date. A marked variation in dose fractionation schemes was evident. While current best practice involves the use of chemotherapy in conjunction with RT for glioblastoma multiforme, advances in patient care may be undermined by this variation in the use of RT. Clinical trials relevant to an ageing population and evidence-based national clinical guidelines are required to define best practice.

Relevance:

30.00%

Publisher:

Abstract:

For most data stream applications, the volume of data is too huge to be stored in permanent devices or to be thoroughly scanned more than once. It is hence recognized that approximate answers are usually sufficient, where a good approximation obtained in a timely manner is often better than the exact answer that is delayed beyond the window of opportunity. Unfortunately, this is not the case for mining frequent patterns over data streams, where algorithms capable of online processing of data streams do not conform strictly to a precise error guarantee. Since the quality of approximate answers is as important as their timely delivery, it is necessary to design algorithms that meet both criteria at the same time. In this paper, we propose an algorithm that allows online processing of streaming data yet guarantees that the support error of frequent patterns stays strictly within a user-specified threshold. Our theoretical and experimental studies show that our algorithm is an effective and reliable method for finding frequent sets in data stream environments when both constraints need to be satisfied.
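The paper's own algorithm is not reproduced here, but the strict support-error guarantee it targets is of the same kind provided by the classic Lossy Counting algorithm, which undercounts any item's frequency by at most ε·N over a stream of N items; a sketch for single items:

```python
from math import ceil

def lossy_counting(stream, epsilon):
    """Approximate item frequencies; undercount is at most epsilon * N."""
    width = ceil(1 / epsilon)           # bucket width
    counts, deltas = {}, {}
    for n, item in enumerate(stream, start=1):
        bucket = ceil(n / width)
        if item in counts:
            counts[item] += 1
        else:
            counts[item] = 1
            deltas[item] = bucket - 1   # maximum possible missed count
        if n % width == 0:              # prune at each bucket boundary
            for k in list(counts):
                if counts[k] + deltas[k] <= bucket:
                    del counts[k], deltas[k]
    return counts   # true count >= counts[k] >= true count - epsilon * N

stream = ["a", "b"] * 30 + ["a"] * 40   # 100 items: 'a' x 70, 'b' x 30
approx = lossy_counting(stream, epsilon=0.05)
print(approx["a"])  # within epsilon * N = 5 of the true count 70
```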

Relevance:

30.00%

Publisher:

Abstract:

Previously, we indicated that we wanted to address the dialogue pertaining to education and teaching approaches to increase the use of specific types of evidence that exist to guide and inform practice, and began this by focusing on Clinical Practice Guidelines (CPGs). This column builds on that knowledge to highlight how educators can use CPGs in practice and in change situations, while also raising awareness of the limitations of these tools in terms of their impact on practice.

Relevance:

30.00%

Publisher:

Abstract:

Background : Human error occurs in every occupation. Medical errors may result in a near miss or an actual injury to a patient that has nothing to do with the underlying medical condition. Intensive care has one of the highest incidences of medical error and patient injury of any medical specialty, thought to be related to the rapidly changing patient status and complex diagnoses and treatments.

Purpose :
The aims of this paper are to: (1) outline the definition, classifications and aetiology of medical error; (2) summarise key findings from the literature with a specific focus on errors arising from intensive care areas; and (3) conclude with an outline of approaches for analysing clinical information to determine adverse events and inform practice change in intensive care.

Data source : Database searches of articles and textbooks using keywords: medical error, patient safety, decision making and intensive care. Sociology and psychology literature cited therein.

Findings : Critically ill patients require numerous medications, multiple infusions and procedures. Although medical errors are often detected by clinicians at the bedside, organisational processes and systems may contribute to the problem. A systems approach is thought to provide greater insight into the contributory factors and potential solutions to avoid preventable adverse events.

Conclusion : It is recommended that a variety of clinical information and research techniques are used as a priority to prevent hospital acquired injuries and address patient safety concerns in intensive care.

Relevance:

30.00%

Publisher:

Abstract:

Precision edge feature extraction is a very important step in vision. Researchers mainly use step edges to model an edge at the subpixel level. In this paper we describe a new technique for two-dimensional edge feature extraction to subpixel accuracy using a general edge model. Using six basic edge types to model edges, the edge parameters at the subpixel level are extracted by fitting a model to the image signal using a least-squared-error fitting technique.
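The fitting idea can be illustrated in one dimension with the simplest of the edge types, a step: scan candidate subpixel edge positions and keep the one minimising the squared error against the pixel samples (the signal values below are synthetic, and the search is a brute-force sketch rather than the paper's method):

```python
def step_pixel(i, e, lo=0.0, hi=1.0):
    """Average of a step (lo -> hi at position e) over pixel [i, i+1)."""
    if e <= i:
        return hi
    if e >= i + 1:
        return lo
    frac = (i + 1) - e                  # portion of the pixel past the edge
    return lo * (e - i) + hi * frac

def fit_edge(signal, lo=0.0, hi=1.0, res=0.01):
    """Brute-force least-squares fit of a step position to subpixel accuracy."""
    n = len(signal)
    best_e, best_sse = None, float("inf")
    e = 0.0
    while e <= n:
        sse = sum((signal[i] - step_pixel(i, e, lo, hi)) ** 2 for i in range(n))
        if sse < best_sse:
            best_e, best_sse = e, sse
        e = round(e + res, 10)
    return best_e

signal = [0.0, 0.0, 0.0, 0.7, 1.0, 1.0, 1.0]   # step located at x = 3.3
print(fit_edge(signal))                         # recovers 3.3 despite integer sampling
```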

Relevance:

30.00%

Publisher:

Abstract:

In the last two decades, the Six Sigma approach has found success in manufacturing sectors. The relevance of Six Sigma methodologies in the service sector has been realised more recently. This paper investigates the application of the Six Sigma approach to improving quality in electronic services (e-services), as more and more countries adopt e-services as a means of providing services to their community and people through the Web. In particular, this paper presents a case study on the use of the Six Sigma model to measure the customer satisfaction and quality levels achieved in e-services recently launched by public sector organisations in a developing country, Jordan. An empirical study of 280 participating customers of Jordan's e-services is conducted, and problems are identified through the DMAIC phases of Six Sigma. The service quality levels are measured and analysed using six main criteria, namely Website Design, Reliability, Responsiveness, Personalization, Information Quality, and System Quality. The overall result of the study, indicating 74% customer satisfaction at a Six Sigma level of 2.12, has enabled the Greater Amman Municipality to identify the usability issues associated with its e-services and to take leads from the results of the study to improve customer satisfaction. The aim of the paper is not only to implement Six Sigma as a measurement-based strategy for improving e-customer service quality in a newly launched e-service programme, but also to widen its scope by investigating other service dimensions and performing comparative studies in other developing countries as future research.
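A sigma level is conventionally derived from process yield via the inverse normal CDF plus the customary 1.5-sigma long-term shift; applied to a 74% yield this lands near the reported 2.12 (the paper's exact calculation may differ, e.g. if it worked from defects per million opportunities):

```python
from statistics import NormalDist

def sigma_level(yield_fraction, shift=1.5):
    """Short-term sigma level from process yield, applying the
    conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(yield_fraction) + shift

print(round(sigma_level(0.74), 2))  # roughly 2.1
```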

Relevance:

30.00%

Publisher:

Abstract:

This study proposes a novel non-parametric method for construction of prediction intervals (PIs) using interval type-2 Takagi-Sugeno-Kang fuzzy logic systems (IT2 TSK FLSs). The key idea in the proposed method is to treat the left and right end points of the type-reduced set as the lower and upper bounds of a PI. This allows us to construct PIs without making any special assumption about the data distribution. A new training algorithm is developed to satisfy conditions imposed by the associated confidence level on PIs. Proper adjustment of premise and consequent parameters of IT2 TSK FLSs is performed through the minimization of a PI-based objective function, rather than traditional error-based cost functions. This new cost function covers both validity and informativeness aspects of PIs. A metaheuristic method is applied for minimization of the non-linear, non-differentiable cost function. Quantitative measures are applied for assessing the quality of PIs constructed using IT2 TSK FLSs. The demonstrated results for four benchmark case studies with homogeneous and heterogeneous noise clearly show that the proposed method is capable of generating high-quality PIs useful for decision-making.
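The paper's specific quantitative measures are not reproduced here, but two widely used ways to score validity and informativeness of PIs are the coverage probability (PICP) and the mean interval width; a sketch with invented targets and bounds:

```python
def picp(targets, lower, upper):
    """PI coverage probability: fraction of targets inside their interval."""
    inside = sum(lo <= t <= up for t, lo, up in zip(targets, lower, upper))
    return inside / len(targets)

def mean_pi_width(lower, upper):
    """Average width of the prediction intervals (informativeness proxy)."""
    return sum(up - lo for lo, up in zip(lower, upper)) / len(lower)

# Invented targets with lower/upper PI bounds per point
targets = [1.0, 2.0, 3.0, 4.0]
lower   = [0.5, 1.8, 2.0, 4.2]
upper   = [1.5, 2.5, 3.5, 5.0]
print(picp(targets, lower, upper))       # 3 of 4 targets covered
print(mean_pi_width(lower, upper))
```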

Relevance:

30.00%

Publisher:

Abstract:

Aims  To undertake further psychometric validation of the W-BQ28 to determine its suitability for use in adults with Type 2 diabetes in the UK using data from the AT.LANTUS follow-on study.

Methods  A total of 353 people with Type 2 diabetes participated in the AT.LANTUS follow-on study, completing measures of well-being (W-BQ28), treatment satisfaction (DTSQ) and self-care (SCI-R). Confirmatory factor analysis was used to confirm the W-BQ28 structure, and internal consistency reliability was assessed. Additional statistical tests were conducted to explore convergent, divergent and known-groups validity. Minimal important differences were calculated using distribution- and anchor-based techniques.

Results  Structure of the W-BQ28 (seven four-item subscales plus 16-item generic and 12-item diabetes-specific scales) was confirmed (comparative fit index = 0.917, root mean square error of approximation (RMSEA) = 0.057). Internal consistency reliability was satisfactory (four-item subscales: alpha = 0.73–0.90; 12/16-item scales: α = 0.84–0.90). Convergent validity was supported by expected moderate to high correlations (rs = 0.35–0.67) between all W-BQ28 subscales (except Energy); divergent validity was supported by expected low to moderate correlations with treatment satisfaction (rs = −0.03–0.52) and self-care (rs = 0.02–0.22). Known-groups validity was supported with statistically significant differences by sex, age and HbA1c for expected subscales. Minimal important differences were established (range 0.14–2.90).
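The alpha coefficients above are Cronbach's alpha, which can be computed directly from item-score columns; the scores below are invented for illustration, not AT.LANTUS data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns (one list per item,
    one entry per respondent)."""
    k, n = len(items), len(items[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(col) for col in items) / var(totals))

# Invented scores: 3 items, 4 respondents
items = [[3, 4, 5, 4], [4, 4, 5, 3], [3, 5, 5, 4]]
print(round(cronbach_alpha(items), 3))
```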

Conclusions  The W-BQ28 is a valid and reliable measure of generic and diabetes-specific well-being in Type 2 diabetes in the UK. Confirmation of the utility of W-BQ28 (including establishment of minimal important differences) means that its use is indicated in research and clinical practice.

Relevance:

30.00%

Publisher:

Abstract:

Building on a habitat mapping project completed in 2011, Deakin University was commissioned by Parks Victoria (PV) to apply the same methodology and ground-truth data to a second, more recent and higher resolution satellite image to create habitat maps for areas within the Corner Inlet and Nooramunga Marine and Coastal Park and Ramsar area. A ground-truth data set using in situ video and still photographs was used to develop and assess predictive models of benthic marine habitat distributions incorporating data from both RapidEye satellite imagery (corrected for atmospheric and water column effects by CSIRO) and LiDAR (Light Detection and Ranging) bathymetry. This report describes the results of the mapping effort as well as the methodology used to produce these habitat maps.

Overall accuracies of habitat classifications were good, with error rates similar to or better than the earlier classification (>73% and kappa values > 0.58 for both study localities). The RapidEye classification failed to accurately detect Pyura and reef habitat classes at the Corner Inlet locality, possibly due to differences in spectral frequencies. For comparison, these categories were combined into a ‘non-seagrass’ category, similar to the one used at the Nooramunga locality in the original classification. Habitats predicted with highest accuracies differed from the earlier classification and were Posidonia in Corner Inlet (89%), and bare sediment (no-visible seagrass class) in Nooramunga (90%). In the Corner Inlet locality reef and Pyura habitat categories were not distinguishable in the repeated classification and so were combined with bare sediments. The majority of remaining classification errors were due to the misclassification of Zosteraceae as bare sediment and vice versa. Dominant habitats were the same as those from the 2011 classification with some differences in extent. For the Corner Inlet study locality the no-visible seagrass category remained the most extensive (9,059 ha), followed by Posidonia (5,513 ha) and Zosteraceae (5,504 ha). In Nooramunga no-visible seagrass (6,294 ha), Zosteraceae (3,122 ha) and wet saltmarsh (1,562 ha) habitat classes were most dominant.
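The kappa values quoted above measure chance-corrected agreement between predicted and ground-truth habitat classes; Cohen's kappa can be computed from a confusion matrix (the counts below are hypothetical, not the study's):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = actual class, columns = predicted class)."""
    n = sum(sum(row) for row in confusion)
    # observed agreement: diagonal proportion
    po = sum(confusion[i][i] for i in range(len(confusion))) / n
    # expected agreement from row/column marginals
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical 2-class matrix (e.g. seagrass vs bare sediment)
cm = [[40, 10],
      [ 5, 45]]
print(round(cohens_kappa(cm), 3))
```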

Change detection analyses between the 2009 and 2011 imagery were undertaken as part of this project, following the analyses presented in Monk et al. (2011) and incorporating error estimates from both classifications. These analyses indicated some shifts in classification between Posidonia and Zosteraceae as well as a general reduction in the area of Zosteraceae. Issues with classification of mixed beds were apparent, particularly in the main Posidonia bed at Nooramunga where a mosaic of Zosteraceae and Posidonia was seen that was not evident in the ALOS classification. Results of a reanalysis of the 1998-2009 change detection illustrating effects of binning of mixed beds is also provided as an appendix.

This work has been successful in providing baseline maps at an improved level of detail using a repeatable method meaning that any future changes in intertidal and shallow water marine habitats may be assessed in a consistent way with quantitative error assessments. In wider use, these maps should also allow improved conservation planning, advance fisheries and catchment management, and progress infrastructure planning to limit impacts on the Inlet environment.

Relevance:

30.00%

Publisher:

Abstract:

This paper introduces a new type reduction (TR) algorithm for interval type-2 fuzzy logic systems (IT2 FLSs). Flexibility and adaptiveness are the key features of the proposed non-parametric algorithm. Lower and upper firing strengths of rules, as well as their consequent coefficients, are fed into a neural network (NN). The NN output is a crisp value that corresponds to the defuzzified output of the IT2 FLS. The NN type reducer is trained through minimization of an error-based cost function with the purpose of improving the modelling and forecasting performance of IT2 FLS models. Simulation results indicate that application of the proposed TR algorithm greatly enhances modelling and forecasting performance of IT2 FLS models. This benefit is achieved at no cost, as the computational requirement of the proposed algorithm is less than or at most equal to that of traditional TR algorithms.
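The core mapping — a trained network taking rule firing strengths and consequent coefficients to a crisp output — can be sketched as a one-hidden-layer forward pass; all weights here are placeholders, not trained values, and the actual architecture in the paper may differ:

```python
from math import tanh

def nn_type_reduce(f_lower, f_upper, consequents,
                   w_hidden, b_hidden, w_out, b_out):
    """One-hidden-layer NN mapping lower/upper rule firing strengths and
    consequent coefficients to a crisp defuzzified output."""
    x = list(f_lower) + list(f_upper) + list(consequents)
    hidden = [tanh(sum(w * xi for w, xi in zip(ws, x)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

# Placeholder weights for a 2-rule, 2-hidden-unit reducer (in the actual
# method these would be trained by minimising an error-based cost)
crisp = nn_type_reduce(
    f_lower=[0.4, 0.2], f_upper=[0.9, 0.6], consequents=[1.0, 3.0],
    w_hidden=[[0.1] * 6, [-0.2] * 6], b_hidden=[0.0, 0.1],
    w_out=[1.5, 0.5], b_out=2.0)
print(crisp)
```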

Relevance:

30.00%

Publisher:

Abstract:

The prevalence of visual impairment due to uncorrected refractive error has not been previously studied in Canada. A population-based study was conducted in Brantford, Ontario. The target population included all people 40 years of age and older. Study participants were selected using a randomized sampling strategy based on postal codes. Presenting distance and near visual acuities were measured with habitual spectacle correction, if any, in place. Best corrected visual acuities were determined for all participants who had a presenting distance visual acuity of less than 20/25. Population weighted prevalence of distance visual impairment (visual acuity <20/40 in the better eye) was 2.7% (n = 768, 95% confidence interval (CI) 1.8–4.0%) with 71.8% correctable by refraction. Population weighted prevalence of near visual impairment (visual acuity <20/40 with both eyes) was 2.2% (95% CI 1.4–3.6) with 69.1% correctable by refraction. Multivariable-adjusted analysis showed that the odds of having distance visual impairment were independently associated with increased age (odds ratio, OR, 3.56, 95% CI 1.22–10.35; ≥65 years compared to those 39–64 years) and time since last eye examination (OR 4.93, 95% CI 1.19–20.32; ≥5 years compared to ≤2 years). The same factors appear to be associated with increased prevalence of near visual impairment but were not statistically significant. The majority of visual impairment found in Brantford was due to uncorrected refractive error. Factors that increased the prevalence of visual impairment were the same for distance and near visual acuity measurements.
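The odds ratios above are multivariable-adjusted; the crude (unadjusted) version with a Woolf-type 95% CI from a 2×2 table can be sketched as follows (the counts are invented, not the Brantford data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    return or_, exp(log(or_) - z * se_log), exp(log(or_) + z * se_log)

# Invented counts: impairment by age group (older vs younger)
print(odds_ratio_ci(30, 70, 10, 90))  # (OR, CI lower, CI upper)
```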

Relevance:

30.00%

Publisher:

Abstract:

Uncorrected refractive error is the leading cause of visual impairment in the world. In Canada there are potentially 2 million people with visual impairment that could be corrected by simply wearing glasses or contact lenses. This number will double in the next 20 years due to Canada’s rapidly ageing population. Visual impairment can seriously affect quality of life. People with vision problems are more likely to fall, have a higher risk of fractures and other injuries, and they may be more likely to limit or stop driving. Visual impairment is also an independent risk factor for increased mortality in older persons.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Health professionals strive to deliver high-quality care in an inherently complex and error-prone environment. Underreporting of medical errors challenges attempts to understand causative factors and impedes efforts to implement preventive strategies. Audit with feedback is a knowledge translation strategy that has the potential to modify health professionals' medical error reporting behaviour. However, evidence regarding which aspects of this complex, multi-dimensional intervention work best is lacking. The aims of the Safe Medication Audit Reporting Translation (SMART) study are to:

1. Implement and refine a reporting mechanism to feed audit data on medication errors back to nurses.
2. Test the feedback reporting mechanism to determine its utility and effect.
3. Identify characteristics of organisational context associated with error reporting in response to feedback.

METHODS/DESIGN: A quasi-experimental design, incorporating two pairs of matched wards at an acute care hospital, is used. Randomisation occurs at the ward level; one ward from each pair is randomised to receive the intervention. A key stakeholder reference group informs the design and delivery of the feedback intervention. Nurses on the intervention wards receive the feedback intervention (feedback of analysed audit data) on a quarterly basis for 12 months. Data for the feedback intervention come from medication documentation point-prevalence audits and weekly reports on routinely collected medication error data. Weekly reports on these data are obtained for the control wards. A controlled interrupted time series analysis is used to evaluate the effect of the feedback intervention. Self-report data are also collected from nurses on all four wards at baseline and at completion of the intervention to elicit their perceptions of the work context. Additionally, following each feedback cycle, nurses on the intervention wards are invited to complete a survey to evaluate the feedback and to establish their intentions to change their reporting behaviour. To assess sustainability of the intervention, a point-prevalence chart audit is undertaken 6 months after completion of the intervention, and a report of routinely collected medication errors for the previous 6 months is obtained. This intervention will have wider application for delivery of feedback to promote behaviour change in other areas of preventable error and adverse events.