488 results for Reliability level


Relevance:

20.00%

Publisher:

Abstract:

This paper presents a new approach to network upgrading that improves the penetration level of small-scale generators in residential feeders. It proposes that a common DC link be added to the LV network to alleviate the negative impact of increased export power on the AC lines, allowing customers to inject their surplus power into the common DC link without restriction. It is also shown that the proposed approach can serve as a pathway from the current AC network to a future DC network.
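
To make the idea concrete, the following minimal sketch (assumed feeder parameters and export levels, not figures from the paper) uses the standard approximation ΔV ≈ (R·P + X·Q)/V to show how diverting surplus export to a common DC link relieves voltage rise on the AC lines:

```python
# Minimal sketch with assumed numbers (not from the paper). Voltage rise at
# the end of an LV feeder caused by reverse power flow: dV ~= (R*P + X*Q) / V.

R, X = 0.25, 0.10     # assumed feeder resistance/reactance (ohms)
V_NOM = 230.0         # nominal phase voltage (V)

def voltage_rise(p_export_w: float, q_var: float = 0.0) -> float:
    """Approximate steady-state voltage rise caused by exported power."""
    return (R * p_export_w + X * q_var) / V_NOM

surplus_w = 10_000.0  # assumed aggregate PV surplus on the feeder (W)

# All surplus on the AC lines vs. all surplus diverted to a common DC link.
print(f"Voltage rise, export on AC lines: {voltage_rise(surplus_w):.1f} V")
print(f"Voltage rise, export on DC link:  {voltage_rise(0.0):.1f} V")
```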

Relevance:

20.00%

Publisher:

Abstract:

This paper presents a reliability assessment of a substation in the Queensland transmission network in Australia. As part of maintenance considerations, the study uses the substation reliability assessment package STAREL to quantitatively compare the reliability improvement achieved by two circuit breaker reinforcement alternatives for Swanbank: circuit breaker replacement or refurbishment. Substation reliability is interpreted on the basis of outage frequency and outage duration indices for each individual transmission line terminated at the Swanbank 'B' substation. By weighing the reliability indices presented in this paper against the cost analysis conducted by POWERLINK Queensland, a Swanbank 'B' reinforcement alternative can be selected that optimises both transmission line security and the cost incurred in achieving it.
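
For readers unfamiliar with the indices involved, the sketch below illustrates the standard series-system frequency and duration calculations with assumed component data; it is not STAREL output:

```python
# Illustrative series-system indices with assumed component data (not STAREL
# output): f = sum(lam_i), U = sum(lam_i * r_i), r = U / f.

components = [
    # (name, failures per year, mean repair time in hours) -- assumed values
    ("circuit breaker", 0.020, 48.0),
    ("busbar section",  0.010, 12.0),
    ("disconnector",    0.005,  6.0),
]

f = sum(lam for _, lam, _ in components)        # outage frequency (1/yr)
U = sum(lam * r for _, lam, r in components)    # annual outage duration (h/yr)
r = U / f                                       # average outage duration (h)

print(f"Outage frequency:        {f:.3f} /yr")
print(f"Annual outage duration:  {U:.2f} h/yr")
print(f"Average outage duration: {r:.1f} h")
```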

Relevance:

20.00%

Publisher:

Abstract:

Reliability is an integral component of modern power system design, planning and management. This paper applies the Markov approach to substation reliability evaluation using dedicated reliability software. The technique was used to derive reliability indices for an existing, important substation in the POWERLINK QUEENSLAND 275 kV transmission network. Reliability indices were also determined for several reinforcement alternatives for this substation with the aim of improving its reliability, and the economic feasibility of achieving higher levels of reliability was taken into account.
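
As a minimal illustration of the Markov approach (assumed failure and repair rates, not the study's data), a two-state up/down component can be solved for its steady-state probabilities:

```python
import numpy as np

# Minimal sketch of the Markov approach with assumed rates: solve pi @ Q = 0
# with sum(pi) = 1 for the steady-state probabilities of a two-state
# (up/down) component.

lam = 0.02 / 8760   # assumed failure rate (per hour)
mu = 1 / 48.0       # assumed repair rate (per hour; 48 h mean repair)

Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Replace one balance equation with the normalisation condition sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

print(f"Availability:   {pi[0]:.8f}")   # equals mu / (lam + mu)
print(f"Unavailability: {pi[1]:.2e}")   # equals lam / (lam + mu)
```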

Relevance:

20.00%

Publisher:

Abstract:

Background: A range of population-level health outcomes are related to differences in levels of social disadvantage. Understanding the impact of any such differences on palliative care is important. The aim of this study was to assess, by level of socio-economic disadvantage, referral patterns to specialist palliative care and proximity to inpatient services. Methods: All inpatient and community palliative care services nationally were geocoded (using postcode) to one nationally standardised measure of socio-economic deprivation, the Socio-Economic Indexes for Areas (SEIFA; 2006 census data). Referral to palliative care services and the characteristics of referrals were described through data collected routinely at clinical encounters. Distance to inpatient care was measured from each person's home postcode and stratified by socio-economic disadvantage. Results: The study covered July to December 2009, with data from 10,064 patients. People from the highest SEIFA group (least disadvantaged) were significantly less likely to be referred to a specialist palliative care service, were referred closer to death, and had more and longer episodes of inpatient care. The physical proximity of a person's home to inpatient care showed a gradient, with distance increasing as the level of socio-economic advantage decreased. Conclusion: These data suggest that a simple relationship between low socio-economic status and poor access to a referral-based specialty such as palliative care does not exist. Different patterns of referral, and hence different patterns of care, emerge.
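
A hypothetical sketch of the kind of processing involved (illustrative column names and values, not the study's dataset): patients are geocoded to SEIFA quintiles by home postcode, and referral characteristics are then summarised per quintile:

```python
import pandas as pd

# Purely illustrative data and column names -- not the study's dataset.
patients = pd.DataFrame({
    "postcode": [4000, 4101, 4350, 4000, 4880],
    "days_referral_to_death": [40, 12, 95, 8, 60],
    "inpatient_episodes": [1, 3, 0, 2, 1],
})
seifa = pd.DataFrame({
    "postcode": [4000, 4101, 4350, 4880],
    "seifa_quintile": [5, 2, 3, 1],   # 1 = most disadvantaged (assumed coding)
})

# Geocode each patient to a disadvantage quintile, then summarise referrals.
merged = patients.merge(seifa, on="postcode", how="left")
summary = merged.groupby("seifa_quintile").agg(
    median_days_to_death=("days_referral_to_death", "median"),
    mean_inpatient_episodes=("inpatient_episodes", "mean"),
)
print(summary)
```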

Relevance:

20.00%

Publisher:

Abstract:

The ability to estimate asset reliability and the probability of failure is critical to reducing maintenance costs, operation downtime, and safety hazards. Predicting the survival time and the probability of failure at a future time is an indispensable requirement in prognostics and asset health management. In traditional reliability models, the lifetime of an asset is estimated using failure event data alone; however, statistically sufficient failure event data are often difficult to obtain in real-life situations due to poor data management, effective preventive maintenance, and the small population of identical assets in use. Condition indicators and operating environment indicators are two types of covariate data that are normally obtained in addition to failure event and suspension data. These data contain significant information about the state and health of an asset: condition indicators reflect the level of degradation of an asset, while operating environment indicators accelerate or decelerate its lifetime. When these data are available, an alternative to traditional reliability analysis is to model the condition indicators, the operating environment indicators, and their failure-generating mechanisms using a covariate-based hazard model.

The literature review indicates that a number of covariate-based hazard models have been developed, all based on the underlying theory of the Proportional Hazard Model (PHM). However, most of these models have not attracted much attention in the field of machinery prognostics. Moreover, due to the prominence of PHM, attempts at developing alternative models have to some extent been stifled, although a number of alternatives to PHM have been suggested. The existing covariate-based hazard models do not fully utilise the three types of asset health information (failure event data, i.e. observed and/or suspended; condition data; and operating environment data) in a single model for more effective hazard and reliability prediction. In addition, current research shows that condition indicators and operating environment indicators have different characteristics and are non-homogeneous covariate data: condition indicators act as response (dependent) variables, whereas operating environment indicators act as explanatory (independent) variables. Nevertheless, the existing covariate-based hazard models treat these non-homogeneous covariate data in the same way for hazard prediction. The related, and more imperative, question is how both kinds of indicator should be effectively modelled and integrated into a covariate-based hazard model.

This work presents a new approach to addressing these challenges. The new covariate-based hazard model, termed the Explicit Hazard Model (EHM), explicitly and effectively incorporates all three types of available asset health information into the modelling of hazard and reliability predictions, and derives the relationship between actual asset health and both condition measurements and operating environment measurements. The theoretical development of the model and its parameter estimation method are demonstrated in this work. EHM assumes that the baseline hazard is a function of both time and the condition indicators. Condition indicators provide information about the health condition of an asset; they therefore update and reform the baseline hazard of EHM according to the health state of the asset at a given time t. Examples of condition indicators include the vibration of rotating machinery, the level of metal particles in engine oil analysis, and wear in a component, to name but a few. Operating environment indicators in this model are failure accelerators and/or decelerators that are included in the covariate function of EHM and may increase or decrease the hazard relative to the baseline. These indicators are caused by the environment in which an asset operates and have not been explicitly captured by the condition indicators (e.g. loads, environmental stresses, and other dynamically changing environmental factors). While the effects of the operating environment indicators may be nil in EHM, the condition indicators are always present, because they are observed and measured for as long as the asset remains operational.

EHM has several advantages over the existing covariate-based hazard models. One is that the model utilises three different sources of asset health data (population characteristics, condition indicators, and operating environment indicators) to predict hazard and reliability effectively. Another is that EHM explicitly investigates the relationship between the condition and operating environment indicators associated with the hazard of an asset. Furthermore, the proportionality assumption, from which most covariate-based hazard models suffer, does not exist in EHM. Depending on the sample size of failure/suspension times, EHM is developed in two forms: semi-parametric and non-parametric. The semi-parametric EHM assumes a specified lifetime distribution (i.e. the Weibull distribution) in the form of the baseline hazard. However, in many industrial applications failure event data are sparse, and the analysis of such data often involves complex distributional shapes about which little is known. Therefore, to avoid the semi-parametric EHM's restrictive assumption of a specified lifetime distribution for failure event histories, the non-parametric EHM, a distribution-free model, has been developed. The development of EHM in these two forms is a further merit of the model.

A case study using laboratory experiment data was conducted to validate the practicality of both the semi-parametric and non-parametric EHMs. The performance of the newly developed models was appraised by comparing their estimates with those of the existing covariate-based hazard models. The comparison demonstrated that both the semi-parametric and non-parametric EHMs outperform the existing covariate-based hazard models. Future research directions are also identified, including the new parameter estimation method in the case of time-dependent covariate effects and missing data, the application of EHM to both repairable and non-repairable systems using field data, and a decision support model linked to the estimated reliability results.
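
As a rough illustration of the structure described above (the functional forms below are assumptions for exposition, not the thesis's actual EHM equations), the baseline hazard can depend on both time and a condition indicator, with operating environment covariates scaling it multiplicatively:

```python
import numpy as np

# Hedged sketch only: a Weibull-style baseline modified by a condition
# indicator z_c(t), scaled by an operating-environment covariate z_e(t).
# All functional forms and parameter values here are illustrative assumptions.

def baseline_hazard(t, z_c, beta=2.0, eta=5000.0, alpha=0.8):
    """Baseline hazard as a function of both time and condition."""
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(alpha * z_c)

def hazard(t, z_c, z_e, gamma=0.5):
    """Full hazard: baseline (time + condition) times an environment factor.
    gamma * z_e > 0 accelerates failure; < 0 decelerates it."""
    return baseline_hazard(t, z_c) * np.exp(gamma * z_e)

t = 2000.0   # operating hours
z_c = 0.6    # e.g. normalised vibration level (condition indicator)
z_e = 1.2    # e.g. normalised load above rated conditions (environment)
print(f"h(t | z_c, z_e) = {hazard(t, z_c, z_e):.3e} failures/hour")
```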

Relevance:

20.00%

Publisher:

Abstract:

NeSSi (network security simulator) is a novel network simulation tool that incorporates a variety of features relevant to network security, distinguishing it from general-purpose network simulators. Its capabilities, such as profile-based automated attack generation, traffic analysis, and support for detection algorithm plug-ins, allow it to be used for security research and evaluation purposes. NeSSi has been successfully used for testing intrusion detection algorithms, conducting network security analysis, and developing overlay security frameworks. NeSSi is built upon the agent framework JIAC, resulting in a distributed and extensible architecture. In this paper, we provide an overview of the NeSSi architecture and its distinguishing features, and briefly demonstrate its application to current security research projects.
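
The sketch below shows, in purely hypothetical form, what a detection-algorithm plug-in of the kind described might look like; it is not NeSSi's actual API (NeSSi itself is built on the Java-based JIAC agent framework):

```python
# Hypothetical plug-in interface for a simulator of this kind.
# This is NOT NeSSi's real API -- names and signatures are invented here.

from abc import ABC, abstractmethod

class DetectionPlugin(ABC):
    """A pluggable detector that observes simulated traffic events."""

    @abstractmethod
    def inspect(self, packet: dict) -> bool:
        """Return True if the packet should be flagged as malicious."""

class ThresholdSynDetector(DetectionPlugin):
    """Flags a source once its SYN count exceeds a fixed threshold."""

    def __init__(self, threshold: int = 100):
        self.threshold = threshold
        self.syn_counts: dict[str, int] = {}

    def inspect(self, packet: dict) -> bool:
        if packet.get("flags") == "SYN":
            src = packet["src"]
            self.syn_counts[src] = self.syn_counts.get(src, 0) + 1
            return self.syn_counts[src] > self.threshold
        return False
```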

Relevance:

20.00%

Publisher:

Abstract:

Climate change and land use pressures are making environmental monitoring increasingly important. With environmental health degrading at an alarming rate, ecologists have tried to tackle the problem by monitoring the composition and condition of the environment. Traditional expert-based monitoring methods, however, are manual and expensive; to address this, government organisations have designed a simpler and faster surrogate-based assessment technique for consultants, landholders, and ordinary citizens. That technique nevertheless remains complex, subjective, and error prone, which makes the collected data difficult to interpret and compare. In this paper we describe a work-in-progress mobile application designed to address these shortcomings through the use of augmented reality and multimedia smartphone technology.

Relevance:

20.00%

Publisher:

Abstract:

Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps answer some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and, finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades of 632 students across the four subjects and compare demonstrated student performance against the expected learning for each. The key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
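
A hedged sketch of the kind of mapping such a model captures (the field names below are illustrative, not the paper's actual schema): each exam item is mapped to a CS2013 topic and outcome at a Bloom's-taxonomy mastery level, together with a confidence rating on the mapping itself:

```python
# Illustrative data model only -- field names are assumptions, not the
# paper's schema.

from dataclasses import dataclass

BLOOM = ["remember", "understand", "apply", "analyse", "evaluate", "create"]

@dataclass
class ItemMapping:
    exam_item: str      # e.g. "Q3b"
    topic: str          # a CS2013 knowledge area/topic
    outcome: str        # a CS2013 learning outcome
    mastery: str        # one of BLOOM
    confidence: float   # mapper's confidence in this mapping, 0..1

mappings = [
    ItemMapping("Q1a", "SDF/Fundamental Programming Concepts",
                "Trace program execution", "understand", 0.9),
    ItemMapping("Q3b", "SDF/Algorithms and Design",
                "Implement a divide-and-conquer algorithm", "apply", 0.7),
]

# Compare intended vs. demonstrated mastery once itemised grades are imported.
grades = {"Q1a": 0.82, "Q3b": 0.41}   # mean normalised scores (illustrative)
for m in mappings:
    print(f"{m.exam_item}: expected '{m.mastery}', "
          f"mean score {grades[m.exam_item]:.0%}")
```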

Relevance:

20.00%

Publisher:

Abstract:

Recent research has proposed Neo-Piagetian theory as a useful way of describing the cognitive development of novice programmers. Neo-Piagetian theory may also be a useful way to classify materials used in learning and assessment. If Neo-Piagetian coding of learning resources is to be useful, then it is important that practitioners can learn it and apply it reliably. We describe the design of an interactive web-based tutorial for Neo-Piagetian categorization of assessment tasks. We also report an evaluation of the tutorial's effectiveness, in which twenty computer science educators participated. The participants' average classification accuracy on the three Neo-Piagetian stages was 85%, 71% and 78% respectively. Participants also rated their agreement with the expert classifications, indicating high agreement (91%, 83% and 91% across the three stages). Self-rated confidence in applying Neo-Piagetian theory to classifying programming questions was 29% before the tutorial and 75% after it. Our key contribution is a demonstration of the feasibility of the Neo-Piagetian approach to classifying assessment materials: it is learnable and can be applied reliably by a group of educators. Our tutorial is freely available as a community resource.

Relevance:

20.00%

Publisher:

Abstract:

Objectives: To develop and test the preliminary reliability and validity of a Self-Efficacy Questionnaire for Chinese Family Caregivers (SEQCFC). Methods: A cross-sectional survey of 196 family caregivers (CGs) of people with dementia was conducted to determine the factor structure of the SEQCFC. Following factor analyses, preliminary testing was performed, including internal consistency, 4-week test–retest reliability, and construct and convergent validity. Results: Factor analyses with direct oblimin rotation were performed. Eight items were removed and five subscales were identified: self-efficacy for gathering information about treatment, symptoms and health care; obtaining support; responding to behavioural disturbances; managing household, personal and medical care; and managing distress associated with caregiving. The Cronbach's alpha coefficients for the whole scale and for each subscale were all over 0.80. The 4-week test–retest reliabilities for the whole scale and for each subscale ranged from 0.64 to 0.85. Convergent validity was acceptable. Conclusions: Evidence from the preliminary testing of the SEQCFC is encouraging. A follow-up study using confirmatory factor analysis with a new sample from different recruitment centres in Shanghai will be conducted. Further psychometric testing of the questionnaire will be required for CGs from other regions of mainland China.
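
For reference, Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); a worked sketch on synthetic data (not the study's survey responses) follows:

```python
import numpy as np

# Synthetic-data sketch of Cronbach's alpha; the simulated responses below
# are illustrative, not the study's data.

rng = np.random.default_rng(0)
k, n = 6, 196                                   # items per subscale, respondents
latent = rng.normal(size=(n, 1))                # shared "true" self-efficacy
items = latent + 0.6 * rng.normal(size=(n, k))  # item responses with noise

item_vars = items.var(axis=0, ddof=1)           # variance of each item
total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed score
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")         # > 0.80 suggests good
                                                # internal consistency
```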

Relevance:

20.00%

Publisher:

Abstract:

Objective: To examine the association between individual- and neighborhood-level disadvantage and self-reported arthritis. Methods: We used data from a population-based cross-sectional study conducted in 2007 among 10,757 men and women ages 40–65 years, selected from 200 neighborhoods in Brisbane, Queensland, Australia using a stratified two-stage cluster design. Data were collected using a mail survey (68.5% response). Neighborhood disadvantage was measured using a census-based composite index; individual disadvantage was measured using self-reported education, household income, and occupation. Arthritis was indicated by self-report. Data were analyzed using multilevel modeling. Results: The overall rate of self-reported arthritis was 23% (95% confidence interval [95% CI] 22–24). After adjustment for sociodemographic factors, arthritis prevalence was greatest for women (odds ratio [OR] 1.5, 95% CI 1.4–1.7), those ages 60–65 years (OR 4.4, 95% CI 3.7–5.2), those with a diploma/associate diploma (OR 1.3, 95% CI 1.1–1.6), those who were permanently unable to work (OR 4.0, 95% CI 3.1–5.3), and those with a household income <$25,999 (OR 2.1, 95% CI 1.7–2.6). Independent of individual-level factors, residents of the most disadvantaged neighborhoods were 42% (OR 1.4, 95% CI 1.2–1.7) more likely than those in the least disadvantaged neighborhoods to self-report arthritis. Cross-level interactions between neighborhood disadvantage and education, occupation, and household income were not significant. Conclusion: Arthritis prevalence is greater in more socially disadvantaged neighborhoods. These are the first multilevel data to examine the relationship between individual- and neighborhood-level disadvantage and arthritis, and they have important implications for policy, health promotion, and other intervention strategies designed to reduce the rates of arthritis, indicating that intervention efforts may need to focus on both people and places.
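
To illustrate how the reported odds ratios relate to the underlying multilevel logit coefficients, here is a small worked example using the abstract's headline neighborhood result (OR 1.4, 95% CI 1.2–1.7); the coefficient and standard error are back-calculated for illustration, not taken from the paper:

```python
import math

# OR = exp(beta) for a logit coefficient beta; a symmetric 95% CI on the
# log-odds scale lets us back out an approximate standard error.

beta = math.log(1.4)                                # implied coefficient
se = (math.log(1.7) - math.log(1.2)) / (2 * 1.96)   # implied standard error

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR {or_point:.1f}, 95% CI {ci_low:.1f}-{ci_high:.1f}")
```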

Relevance:

20.00%

Publisher:

Abstract:

A major priority for cancer control agencies is to reduce geographical inequalities in cancer outcomes. While poorer breast cancer survival among socioeconomically disadvantaged women is well established, few studies have examined the independent contributions that area- and individual-level factors make to breast cancer survival. Here we examine the relationships between geographic remoteness, area-level socioeconomic disadvantage and breast cancer survival after adjustment for patients' socio-demographic characteristics and stage at diagnosis. Multilevel logistic regression and Markov chain Monte Carlo simulation were used to analyze 18,568 breast cancer cases extracted from the Queensland Cancer Registry for women aged 30 to 70 years diagnosed between 1997 and 2006 across 478 Statistical Local Areas in Queensland, Australia. Independent of individual-level factors, area-level disadvantage was associated with breast cancer survival (p=0.032). Compared with women in the least disadvantaged quintile (Quintile 5), women diagnosed while resident in any of the remaining four quintiles had significantly worse survival (OR 1.23, 1.27, 1.30 and 1.37 for Quintiles 4, 3, 2 and 1 respectively). Geographic remoteness was not related to lower survival after multivariable adjustment, and there was no evidence that the impact of area-level disadvantage varied by geographic remoteness. At the individual level, Indigenous status, blue-collar occupation and advanced disease were important predictors of poorer survival. A woman's survival after a diagnosis of breast cancer thus depends on the socio-economic characteristics of the area where she lives, independently of her individual-level characteristics. It is crucial that the underlying reasons for these inequalities be identified so that policies, resources and effective intervention strategies can be appropriately targeted.

Relevance:

20.00%

Publisher:

Abstract:

The starting point for this presentation is that applicants provide a large surplus of information when submitting an NHMRC Project Grant proposal for funding. This is costly in their time, attracts high administration costs, makes the task appear daunting for peer reviewers, and may reduce the quality of the peer review, leading to less-than-perfect reliability in decision making. We are currently experimenting with alternative models to see whether similar reliability in funding outcomes can be achieved at lower cost. We will compare traditional NHMRC Grant Review Panels (GRPs) with panels that use less information and with journal-style panels. By way of background to this experimental work, we will show some results on current levels of reliability for GRPs, the costs incurred by all who participate in Project Grant selection, and the level of reliability acceptable to researchers. By experimenting in this way and building an evidence base for how research funding should be allocated, the NHMRC is showing international leadership in this important field.

Relevance:

20.00%

Publisher:

Abstract:

Low-cost level crossing warning devices are often criticized as being unsafe. Does a SIL (safety integrity level) rating make the railway crossing any safer? This paper discusses how a supporting argument might be made for low-cost level crossing warning devices with lower levels of safety integrity, and addresses issues such as risk tolerability and the derivation of tolerable hazard rates for system-level hazards. As part of designing such systems according to fail-safe principles, the paper examines the assumptions around the pre-defined safe states of existing warning devices and how human factors issues around such states can give rise to additional hazards.
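
As background arithmetic, the commonly quoted IEC 61508 bands for continuous/high-demand mode bound the rate of dangerous failures per hour for each SIL; the sketch below maps an assumed device failure rate onto those bands (the paper itself argues for deriving crossing-specific tolerable hazard rates rather than applying the bands directly):

```python
# SIL bands for continuous/high-demand mode as commonly quoted from
# IEC 61508: (lower bound inclusive, upper bound exclusive), per hour.
SIL_BANDS = {
    1: (1e-6, 1e-5),
    2: (1e-7, 1e-6),
    3: (1e-8, 1e-7),
    4: (1e-9, 1e-8),
}

def sil_for_rate(dangerous_failure_rate_per_hour: float):
    """Return the SIL whose band contains the given failure rate, or None."""
    for sil, (lo, hi) in SIL_BANDS.items():
        if lo <= dangerous_failure_rate_per_hour < hi:
            return sil
    return None

# Assumed figure for a low-cost warning device: roughly one dangerous
# failure per 230 years of continuous operation.
rate = 5e-7
print(f"Rate {rate:.0e}/h falls in SIL {sil_for_rate(rate)}")
```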