98 results for Root cause analysis


Relevance: 80.00%

Abstract:

Mathematics has been perceived as the core area of learning in most educational systems around the world, including Sri Lanka. Unfortunately, it is clearly visible that a majority of Sri Lankan students are failing in basic mathematics when the marks from the recent grade five scholarship examinations and ordinary level examinations are analysed. According to the Department of Examinations, Sri Lanka, on average over 88 percent of students fail the grade 5 scholarship examination, in which mathematics plays a major role, while about 50 percent of students fail their ordinary level mathematics examination. Poor or absent basic mathematics skills have been identified as the root cause.

Relevance: 80.00%

Abstract:

Change point estimation is recognized as an essential tool of root cause analysis within quality control programs, as it enables clinical experts to search for potential causes of a change in hospital outcomes more effectively. In this paper, we consider estimation of the time when a linear trend disturbance has occurred in survival time following an in-control clinical intervention in the presence of variable patient mix. To model the process and change point, a linear trend in the survival time of patients who underwent cardiac surgery is formulated using hierarchical models in a Bayesian framework. The data are right censored since monitoring is conducted over a limited follow-up period. We capture the effect of risk factors prior to the surgery using a Weibull accelerated failure time regression model. We use Markov chain Monte Carlo to obtain posterior distributions of the change point parameters, including the location and the slope of the trend, along with corresponding probabilistic intervals and inferences. The performance of the Bayesian estimator is investigated through simulations, and the results show that precise estimates can be obtained when it is used in conjunction with risk-adjusted survival time cumulative sum (CUSUM) control charts for different trend scenarios. In comparison with the alternatives, a step change point model and the built-in CUSUM estimator, the proposed Bayesian estimator yields more accurate and precise estimates over linear trends. These advantages are enhanced when the probability quantification, flexibility and generalizability of the Bayesian change point detection model are also considered.
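The abstract's hierarchical Bayesian MCMC formulation is not reproduced here; as a rough illustration of the ingredients it describes (a right-censored Weibull accelerated failure time model with a linear trend that starts at an unknown change point), the sketch below profiles a frequentist likelihood over candidate change points. All variable names and the synthetic data are assumptions for illustration, not the paper's model or data.

```python
# Hypothetical sketch: right-censored Weibull AFT log-likelihood with a linear
# post-change trend in the log scale parameter, profiled over candidate change points.
import numpy as np
from scipy.optimize import minimize

def weibull_aft_loglik(params, t, delta, x, idx, tau):
    """Log-likelihood: events contribute log f(t), censored cases log S(t)."""
    b0, b1, slope, log_k = params
    k = np.exp(log_k)                           # Weibull shape > 0
    trend = slope * np.maximum(0.0, idx - tau)  # linear disturbance after tau
    lam = np.exp(b0 + b1 * x + trend)           # AFT scale per patient
    z = (t / lam) ** k
    log_f = np.log(k) - k * np.log(lam) + (k - 1.0) * np.log(t) - z
    log_S = -z
    return np.sum(delta * log_f + (1.0 - delta) * log_S)

def profile_change_point(t, delta, x, taus):
    """Maximise the likelihood over the other parameters for each candidate tau."""
    idx = np.arange(len(t), dtype=float)
    best = []
    for tau in taus:
        res = minimize(lambda p: -weibull_aft_loglik(p, t, delta, x, idx, tau),
                       x0=np.zeros(4), method="Nelder-Mead")
        best.append((tau, -res.fun))
    return max(best, key=lambda pair: pair[1])  # tau with highest profile likelihood

# Synthetic example: a trend in the survival scale starts at patient index 120.
rng = np.random.default_rng(0)
n, true_tau = 200, 120
x = rng.normal(size=n)                          # pre-surgery risk factor
lam = np.exp(1.0 + 0.5 * x + 0.02 * np.maximum(0, np.arange(n) - true_tau))
t_true = lam * rng.weibull(1.5, size=n)
c = rng.uniform(1, 8, size=n)                   # limited follow-up (right censoring)
t, delta = np.minimum(t_true, c), (t_true <= c).astype(float)
print(profile_change_point(t, delta, x, taus=np.arange(20, 180, 5)))
```

In the paper the same likelihood ingredients sit inside a Bayesian hierarchy sampled by MCMC, which additionally yields posterior intervals for the change point location and slope.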

Relevance: 40.00%

Abstract:

A loss-of-function mutation in the TRESK K2P potassium channel (KCNK18) has recently been linked with typical familial migraine with aura. We now report the functional characterisation of additional TRESK channel missense variants identified in unrelated patients. Several variants either had no apparent functional effect or caused a reduction in channel activity. However, the C110R variant was found to cause a complete loss of TRESK function, yet is present in both sporadic migraine and control cohorts, and no variation in KCNK18 copy number was found. Thus, despite the previously identified association between loss of TRESK channel activity and migraine in a large multigenerational pedigree, this finding indicates that a single non-functional TRESK variant is not alone sufficient to cause typical migraine and highlights the genetic complexity of this disorder. Migraine is a common, disabling neurological disorder with a genetic, environmental and, in some cases, hormonal component. It is characterized by attacks of severe, usually unilateral and throbbing headache, can be accompanied by nausea, vomiting and photophobia, and is clinically divided into two main subtypes: migraine with aura (MA), when a migraine is accompanied by transient and reversible focal neurological symptoms, and migraine without aura (MO) [1]. The multifactorial nature and clinical heterogeneity of the disorder have considerably hindered the identification of common migraine susceptibility genes, and most of our current understanding comes from studies of familial hemiplegic migraine (FHM), a rare monogenic autosomal dominant form of MA [2]. So far, the three susceptibility genes that have been convincingly identified in FHM families all encode ion channels or transporters: CACNA1A, encoding the α1 subunit of the Cav2.1 calcium channel [3]; SCN1A, encoding the Nav1.1 sodium channel [4]; and ATP1A2, encoding the α2 subunit of the Na+/K+ pump [5]. It is believed that mutations in these genes may lead to increased efflux of glutamate and potassium in the synapse and thereby cause migraine by rendering the brain more susceptible to cortical spreading depression (CSD) [6], which is thought to play a role in initiating a migraine attack [7,8]. However, these genes have not to date been implicated in common forms of migraine [9]. Nevertheless, current opinion suggests that typical migraine, like FHM, is also a disorder of neuronal excitability, ion homeostasis and neurotransmitter release [10,11,12]. Mutations in the SLC4A4 gene, encoding the sodium-bicarbonate cotransporter NBCe1, have recently been implicated in several different forms of migraine [13], and a variety of genes involved in glutamate homeostasis (PGCP, MTDH [14] and LRP1 [15]) and a cation channel (TRPM8) [15] have also recently been implicated in migraine via genome-wide association studies. Ion channels are therefore highly likely to play an important role in the pathogenesis of typical migraine. TRESK (KCNK18) is a member of the two-pore domain (K2P) family of potassium channels involved in the control of cellular electrical excitability [16]. Regulation of TRESK activity by the calcium-dependent phosphatase calcineurin [17], as well as its expression in dorsal root ganglia (DRG) [18] and trigeminal ganglia (TG) [19,20], has led to a proposed role for this channel in a variety of pain pathways. In a recent study, a frameshift mutation (F139Wfsx24) in TRESK was identified in a large multigenerational pedigree, where it co-segregated perfectly with typical MA and gave a significant genome-wide linkage LOD score of 3.0.
Furthermore, functional analysis revealed that this mutation caused a complete loss of TRESK function and that the truncated subunit was also capable of downregulating wild-type channel function. This highlighted KCNK18 as a potentially important candidate gene and suggested that TRESK dysfunction might play a role in the pathogenesis of familial migraine with visual aura [20]. Additional screening for KCNK18 mutations in unrelated sporadic migraine and control cohorts also identified a number of other missense variants: R10G, A34V, C110R, S231P and A233V [20]. The A233V variant was found only in the control cohort, whilst A34V was identified in a single Australian migraine proband for which family samples were not available but was not detected in controls. By contrast, the R10G, C110R and S231P variants were found in both the migraine and control cohorts. In this study, we have investigated the functional effect of these variants to further probe the potential association of TRESK dysfunction with typical migraine.

Relevance: 40.00%

Abstract:

Engineers must have a deep and accurate conceptual understanding of their field, and concept inventories (CIs) are one method of assessing conceptual understanding and providing formative feedback. Current CI tests use multiple choice questions (MCQs) to identify misconceptions and have undergone reliability and validity testing to assess conceptual understanding. However, they do not readily provide diagnostic information about students’ reasoning and therefore do not effectively point to specific actions that can be taken to improve student learning. We piloted the textual component of our diagnostic CI on electrical engineering students using items from the signals and systems CI. We then analysed the textual responses using automated lexical analysis software to test the effectiveness of this type of software, and interviewed the students about their experience of using the textual component. Results from the automated text analysis revealed that students held both incorrect and correct ideas in certain conceptual areas and provided indications of student misconceptions. User feedback also revealed that the inclusion of the textual component helps students assess and reflect on their own understanding.
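The study used dedicated automated lexical analysis software; as a crude stand-in only, the sketch below scores free-text responses against small "correct concept" and "misconception" term lists. The term lists and the example response are invented for illustration.

```python
# Hypothetical sketch of lexicon-based scoring of free-text concept-inventory responses.
import re
from collections import Counter

CORRECT_TERMS = {"convolution", "impulse response", "linear", "time invariant"}
MISCONCEPTION_TERMS = {"multiply the signals", "swap the axes"}

def lexical_profile(response: str) -> dict:
    """Count crude lexical indicators of correct ideas and misconceptions."""
    text = response.lower()
    words = Counter(re.findall(r"[a-z]+", text))
    return {
        "word_count": sum(words.values()),
        "correct_hits": sum(1 for term in CORRECT_TERMS if term in text),
        "misconception_hits": sum(1 for term in MISCONCEPTION_TERMS if term in text),
    }

print(lexical_profile("The output is found by convolution with the impulse response."))
```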

Relevance: 40.00%

Abstract:

Background: Up-to-date evidence on levels and trends for age-sex-specific all-cause and cause-specific mortality is essential for the formation of global, regional, and national health policies. In the Global Burden of Disease Study 2013 (GBD 2013) we estimated yearly deaths for 188 countries between 1990 and 2013. We used the results to assess whether there is epidemiological convergence across countries. Methods: We estimated age-sex-specific all-cause mortality using the GBD 2010 methods, with some refinements to improve accuracy, applied to an updated database of vital registration, survey, and census data. We generally estimated cause of death as in GBD 2010. Key improvements included the addition of more recent vital registration data for 72 countries, an updated verbal autopsy literature review, two new and detailed data systems for China, and more detail for Mexico, the UK, Turkey, and Russia. We improved statistical models for garbage code redistribution. We used six different modelling strategies across the 240 causes; cause of death ensemble modelling (CODEm) was the dominant strategy for causes with sufficient information. Trends for Alzheimer's disease and other dementias were informed by meta-regression of prevalence studies. For pathogen-specific causes of diarrhoea and lower respiratory infections we used a counterfactual approach. We computed two measures of convergence (inequality) across countries: the average relative difference across all pairs of countries (Gini coefficient) and the average absolute difference across countries. To summarise broad findings, we used multiple decrement life-tables to decompose probabilities of death from birth to exact age 15 years, from exact age 15 years to exact age 50 years, and from exact age 50 years to exact age 75 years, and life expectancy at birth, into major causes. For all quantities reported, we computed 95% uncertainty intervals (UIs). We constrained cause-specific fractions within each age-sex-country-year group to sum to all-cause mortality based on draws from the uncertainty distributions. Findings: Global life expectancy for both sexes increased from 65·3 years (UI 65·0–65·6) in 1990 to 71·5 years (UI 71·0–71·9) in 2013, while the number of deaths increased from 47·5 million (UI 46·8–48·2) to 54·9 million (UI 53·6–56·3) over the same interval. Global progress masked variation by age and sex: for children, average absolute differences between countries decreased but relative differences increased. For women aged 25–39 years and older than 75 years, and for men aged 20–49 years and 65 years and older, both absolute and relative differences increased. Decomposition of global and regional life expectancy showed the prominent role of reductions in age-standardised death rates for cardiovascular diseases and cancers in high-income regions, and reductions in child deaths from diarrhoea, lower respiratory infections, and neonatal causes in low-income regions. HIV/AIDS reduced life expectancy in southern sub-Saharan Africa. For most communicable causes of death, both numbers of deaths and age-standardised death rates fell, whereas for most non-communicable causes, demographic shifts have increased numbers of deaths but decreased age-standardised death rates. Global deaths from injury increased by 10·7%, from 4·3 million deaths in 1990 to 4·8 million in 2013, but age-standardised rates declined over the same period by 21%. For some causes with more than 100 000 deaths per year in 2013, age-standardised death rates increased between 1990 and 2013, including HIV/AIDS, pancreatic cancer, atrial fibrillation and flutter, drug use disorders, diabetes, chronic kidney disease, and sickle-cell anaemias. Diarrhoeal diseases, lower respiratory infections, neonatal causes, and malaria are still among the top five causes of death in children younger than 5 years. The most important pathogens are rotavirus for diarrhoea and pneumococcus for lower respiratory infections. Country-specific probabilities of death over the three phases of life varied substantially between and within regions. Interpretation: For most countries, the general pattern of reductions in age-sex-specific mortality has been associated with a progressive shift towards a larger share of the remaining deaths being caused by non-communicable diseases and injuries. Assessing epidemiological convergence across countries depends on whether an absolute or relative measure of inequality is used. Nevertheless, age-standardised death rates for seven substantial causes are increasing, suggesting the potential for reversals in some countries. Important gaps exist in the empirical data for cause of death estimates for some countries; for example, no national data for India are available for the past decade.
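The two convergence (inequality) measures named in the methods can be illustrated directly; the sketch below computes the average relative pairwise difference (a Gini-type coefficient) and the average absolute pairwise difference from country-level death rates. The input rates are illustrative, not GBD data.

```python
# Minimal sketch of the convergence measures described above, on made-up rates.
import numpy as np

def convergence_measures(rates):
    """rates: array of country-level death rates (any consistent unit)."""
    rates = np.asarray(rates, dtype=float)
    n = len(rates)
    diffs = np.abs(rates[:, None] - rates[None, :])      # all pairwise absolute differences
    mean_abs_diff = diffs.sum() / (n * (n - 1))          # average absolute difference
    gini = diffs.sum() / (2.0 * n ** 2 * rates.mean())   # average relative difference (Gini)
    return gini, mean_abs_diff

# Example: under-5 death rates (per 1000 live births) for five hypothetical countries.
print(convergence_measures([4.2, 6.8, 15.0, 38.5, 72.1]))
```

Whether inequality appears to rise or fall can differ between the two measures, which is exactly the absolute-versus-relative distinction the interpretation section draws.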

Relevance: 40.00%

Abstract:

A global framework for linear stability analyses of traffic models, based on the dispersion relation root locus method, is presented and applied to a broad class of car-following (CF) models. This approach is able to analyse all aspects of the dynamics: long-wave and short-wave behaviours, phase velocities, and stability features. The methodology is applied to investigate the potential benefits of connected vehicles, i.e. V2V communication enabling a vehicle to send information to, and receive information from, surrounding vehicles. We focus on the design of the cooperation coefficients, which weight the information from downstream vehicles. The tuning of these coefficients is performed and different ways of implementing an efficient cooperative strategy are discussed. Hence, this paper provides design methods for obtaining robust stability of traffic models, with application to cooperative CF models.
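The dispersion relation root locus idea can be sketched on a simple assumed model (the classical optimal velocity car-following model, not necessarily one of the models or the cooperative design studied in the paper): linearise about uniform flow, compute the roots of the dispersion relation for every wavenumber, and check the sign of their real parts.

```python
# Hedged sketch: root locus of the linearised optimal-velocity model dispersion relation
#   lambda^2 + a*lambda + a*Vp*(1 - exp(-i*alpha)) = 0,  alpha in (0, 2*pi),
# where a is the driver sensitivity and Vp the slope of the optimal velocity function.
import numpy as np

def dispersion_roots(a, Vp, n_waves=200):
    """Return lambda(alpha) roots; string stability needs Re(lambda) <= 0 for all alpha."""
    alphas = np.linspace(1e-3, 2 * np.pi - 1e-3, n_waves)
    roots = np.array([np.roots([1.0, a, a * Vp * (1.0 - np.exp(-1j * alpha))])
                      for alpha in alphas])
    return alphas, roots

def string_stable(a, Vp):
    _, roots = dispersion_roots(a, Vp)
    return roots.real.max() <= 1e-9

# Classical OVM result: linear string stability requires a > 2 * Vp.
print(string_stable(a=2.5, Vp=1.0))   # True  (2.5 > 2)
print(string_stable(a=1.5, Vp=1.0))   # False (1.5 < 2)
```

Cooperative terms weighting downstream information would add further wavenumber-dependent terms to this characteristic polynomial; the same root-locus check then guides the tuning of the cooperation coefficients.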

Relevance: 40.00%

Abstract:

A combined data matrix consisting of high performance liquid chromatography–diode array detector (HPLC–DAD) and inductively coupled plasma-mass spectrometry (ICP-MS) measurements of samples from the plant roots of Cortex moutan (CM) produced much better classification and prediction results than those obtained from either of the individual data sets. The HPLC peaks (organic components) of the CM samples and the ICP-MS measurements (trace metal elements) were investigated with the use of principal component analysis (PCA) and linear discriminant analysis (LDA); essentially, the qualitative results suggested that discrimination of the CM samples from three different provinces was possible, with the combined matrix producing the best results. Another three methods, K-nearest neighbor (KNN), back-propagation artificial neural network (BP-ANN) and least squares support vector machines (LS-SVM), were applied for the classification and prediction of the samples. Again, the combined data matrix analyzed by the KNN method produced the best results (100% correct for the prediction set data). Additionally, multiple linear regression (MLR) was utilized to explore any relationship between the organic constituents and the metal elements of the CM samples; the extracted linear regression equations showed that the essential metals, as well as some metallic pollutants, were related to the organic compounds on the basis of their concentrations.
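The data-fusion step described above amounts to concatenating the two measurement matrices column-wise before chemometric modelling. The sketch below does this on synthetic data with a scikit-learn PCA + KNN pipeline; it is an illustration of the general workflow, not the authors' preprocessing or their CM measurements.

```python
# Illustrative sketch: fuse a chromatographic peak matrix and a trace-element matrix,
# then classify provenance with PCA + k-nearest neighbours (synthetic data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_class, provinces = 20, 3
y = np.repeat(np.arange(provinces), n_per_class)
hplc = rng.normal(loc=y[:, None], scale=1.0, size=(provinces * n_per_class, 12))    # peak areas
icpms = rng.normal(loc=0.5 * y[:, None], scale=1.0, size=(provinces * n_per_class, 8))  # elements
combined = np.hstack([hplc, icpms])          # the "combined data matrix"

model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      KNeighborsClassifier(n_neighbors=3))
print(cross_val_score(model, combined, y, cv=5).mean())
```

Scaling before concatenation matters here because chromatographic peak areas and elemental concentrations live on very different numerical scales.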

Relevance: 40.00%

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to map the ‘wet’ areas where trees and houses are partly covered by water. This can be referred to as a typical problem of the presence of mixed pixels in the images. A number of automatic information-extracting image classification algorithms have been developed over the years for flood mapping using optical remote sensing images, most of which label a pixel as a single class. However, they often fail to generate reliable flood inundation maps because of the presence of mixed pixels in the images. To solve this problem, spectral unmixing methods have been developed. In this thesis, methods for selecting endmembers and for modelling the primary classes for unmixing, the two most important issues in spectral unmixing, are investigated. We conduct comparative studies of three typical spectral unmixing algorithms: Partial Constrained Linear Spectral Unmixing, Multiple Endmember Selection Mixture Analysis, and spectral unmixing using the Extended Support Vector Machine method. They are analysed and assessed through error analysis in flood mapping using MODIS, Landsat and WorldView-2 images. Conventional root mean square error assessment is applied to obtain errors for the estimated fractions of each primary class. Moreover, a newly developed Fuzzy Error Matrix is used to obtain a clear picture of error distributions at the pixel level. This thesis shows that the Extended Support Vector Machine method is able to provide a more reliable estimation of fractional abundances and allows the use of a complete set of training samples to model a defined pure class. Furthermore, it can be applied to the analysis of both pure and mixed pixels to provide integrated hard-soft classification results. Our research also identifies and explores a serious drawback of endmember selection in current spectral unmixing methods, which apply a fixed set of endmember classes or pure classes for the mixture analysis of every pixel in an entire image. Because it is not accurate to assume that every pixel in an image must contain all endmember classes, these methods usually cause an over-estimation of the fractional abundances in a particular pixel. In this thesis, a subset of adaptive endmembers for every pixel is derived using the proposed methods to form an endmember index matrix. The experimental results show that using pixel-dependent endmembers in unmixing significantly improves performance.
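For readers unfamiliar with linear spectral unmixing, the sketch below estimates per-pixel fractional abundances with non-negative least squares and a soft sum-to-one constraint. This is a generic constrained-unmixing baseline with synthetic endmember spectra, not the Extended Support Vector Machine method or the adaptive endmember selection developed in the thesis.

```python
# Hedged sketch: constrained linear spectral unmixing of one mixed pixel.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers, weight=1e3):
    """Estimate fractional abundances; endmembers: (n_bands, n_endmembers)."""
    # Append a heavily weighted row of ones to softly enforce sum(abundances) == 1.
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, _ = nnls(A, b)      # non-negativity enforced by NNLS
    return fractions

# Example: a pixel that is 60% water, 30% vegetation, 10% built-up (6 spectral bands).
water = np.array([0.02, 0.03, 0.05, 0.04, 0.01, 0.01])
veg   = np.array([0.03, 0.05, 0.04, 0.30, 0.35, 0.20])
built = np.array([0.15, 0.18, 0.20, 0.25, 0.28, 0.30])
E = np.column_stack([water, veg, built])
mixed = 0.6 * water + 0.3 * veg + 0.1 * built
print(np.round(unmix_pixel(mixed, E), 3))   # ~[0.6, 0.3, 0.1]
```

The drawback the thesis targets is visible in this baseline: the endmember matrix E is fixed for every pixel, whereas the proposed approach selects a pixel-dependent subset of endmembers.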

Relevance: 30.00%

Abstract:

Background: The systematic collection of high-quality mortality data is a prerequisite for designing relevant drowning prevention programmes. This descriptive study aimed to assess the quality (i.e., level of specificity) of cause-of-death reporting using ICD-10 drowning codes across 69 countries. Methods: World Health Organization (WHO) mortality data were extracted for analysis. The proportion of unintentional drowning deaths coded as unspecified at the 3-character level (ICD-10 code W74) and the proportion for which the place of occurrence was unspecified at the 4th character (.9) were calculated for each country as indicators of the quality of cause-of-death reporting. Results: In 32 of the 69 countries studied, the percentage of unintentional drowning deaths coded as unspecified at the 3-character level exceeded 50%, and in 19 countries this percentage exceeded 80%; in contrast, the percentage was lower than 10% in only 10 countries. In 21 of the 56 countries that report 4-character codes, the percentage of unintentional drowning deaths for which the place of occurrence was unspecified at the 4th character exceeded 50%, and in 15 countries it exceeded 90%; in only 14 countries was this percentage lower than 10%. Conclusion: Despite the introduction of more specific subcategories for drowning in the ICD-10, many countries fail to report sufficiently specific codes in the drowning mortality data they submit to the WHO.
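As a minimal illustration of the two reporting-quality indicators, the sketch below computes, from invented counts rather than WHO data, the share of unintentional drowning deaths (ICD-10 W65-W74) coded as unspecified W74 and the share whose 4th-character place of occurrence is ".9".

```python
# Hypothetical sketch of the two indicators described in the methods (invented counts).
def coding_quality(deaths_by_code):
    """deaths_by_code maps ICD-10 codes such as 'W74.9' to death counts."""
    drowning = {c: n for c, n in deaths_by_code.items()
                if c.startswith("W") and 65 <= int(c[1:3]) <= 74}   # W65-W74
    total = sum(drowning.values())
    pct_w74 = 100.0 * sum(n for c, n in drowning.items() if c.startswith("W74")) / total
    pct_place9 = 100.0 * sum(n for c, n in drowning.items() if c.endswith(".9")) / total
    return {"pct_unspecified_W74": pct_w74, "pct_place_unspecified": pct_place9}

print(coding_quality({"W65.0": 5, "W69.2": 12, "W74.0": 3, "W74.9": 40}))
```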

Relevance: 30.00%

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies.

In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. The new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below.

The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model determined from the power system under consideration), and a detection threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.

Optimal Individual Mode Detector (OIMD): As discussed above, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test.

The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the change. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The alarm threshold is based on the simple chi-squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so provides some discrimination in this regard.

The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
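The whiteness idea behind the Kalman Innovation Detector can be illustrated without a full power-system Kalman filter: if the innovation sequence is white, its normalised periodogram ordinates follow (approximately) a chi-squared distribution with 2 degrees of freedom, so strong peaks above a chi-squared threshold flag a change and its frequency. The sketch below simulates an innovation series directly; it is an illustration of the spectral test, not the thesis's detector.

```python
# Hedged sketch of a chi-squared whiteness test on a (supposedly white) innovation sequence.
import numpy as np
from scipy.stats import chi2

def innovation_alarm(innovation, fs, alpha=1e-3):
    """Return frequencies (Hz) whose periodogram ordinates exceed the chi2(2) threshold."""
    innovation = innovation - innovation.mean()
    n = len(innovation)
    psd = (np.abs(np.fft.rfft(innovation)) ** 2) / n       # one-sided periodogram
    stat = 2.0 * psd / innovation.var()                    # ~ chi2(2) under whiteness
    threshold = chi2.ppf(1.0 - alpha, df=2)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    peaks = freqs[stat > threshold]
    return peaks[peaks > 0]

# Example: white innovation until a poorly damped 0.5 Hz mode appears halfway through.
rng = np.random.default_rng(2)
fs, n = 10.0, 1200
t = np.arange(n) / fs
innov = rng.normal(size=n)
innov[n // 2:] += 1.5 * np.exp(-0.02 * t[: n // 2]) * np.sin(2 * np.pi * 0.5 * t[: n // 2])
print(innovation_alarm(innov, fs))   # expected to flag frequencies near 0.5 Hz
```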

Relevance: 30.00%

Abstract:

Bearing damage in modern inverter-fed AC drive systems is more common than in motors working from a 50 or 60 Hz power supply. Fast switching transients and the common mode voltage generated by a PWM inverter cause unwanted shaft voltage and resultant bearing currents. Parasitic capacitive coupling creates a path for discharge currents through the rotor and bearings. In order to analyze bearing current discharges and their effect on bearing damage under different conditions, calculation of the capacitive coupling between the outer and inner races is needed. During motor operation, changes in the distances between the balls and races may alter the capacitance values. Because the thickness and spatial distribution of the lubricating grease change, this capacitance does not have a constant value and is known to vary with speed and load. Thus, the resultant electric field between the races and balls varies with motor speed. The lubricating grease in the ball bearing cannot withstand high voltages, and a short circuit through the grease can occur. At low speeds, because of gravity, the balls and the shaft may shift downwards and the system (ball positions and shaft) becomes asymmetric. In this study, two different asymmetric cases (asymmetric ball position, asymmetric shaft position) are analyzed and the results are compared with the symmetric case. The objective of this paper is to calculate the capacitive coupling and electric fields between the outer and inner races and the balls at different motor speeds for symmetrical and asymmetrical shaft and ball positions. The analysis is carried out using finite element simulations to determine the conditions which increase the probability of high rates of bearing failure due to current discharges through the balls and races.
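The paper itself uses finite element simulation; as a back-of-the-envelope illustration of why a thinner lubricant film raises the discharge risk, the sketch below uses a parallel-plate approximation with assumed geometry and grease permittivity. All numbers are illustrative assumptions.

```python
# Crude parallel-plate sketch (assumed values): ball-race film capacitance and the
# electric field across the lubricant film as a function of film thickness.
EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R_GREASE = 2.5        # assumed relative permittivity of the lubricating grease

def film_capacitance_and_field(contact_area_mm2, film_um, shaft_voltage):
    area = contact_area_mm2 * 1e-6            # m^2
    gap = film_um * 1e-6                      # m
    capacitance = EPS0 * EPS_R_GREASE * area / gap
    field = shaft_voltage / gap               # V/m across the film
    return capacitance, field

for film in (0.1, 0.5, 2.0):                  # thinner films at low speed / high load
    c, e = film_capacitance_and_field(contact_area_mm2=0.2, film_um=film, shaft_voltage=10.0)
    print(f"film {film:4.1f} um: C = {c*1e12:6.2f} pF, E = {e/1e6:6.1f} MV/m")
```

Even this crude estimate shows fields of tens of MV/m across sub-micron films at modest shaft voltages, which is why the film can break down and discharge through the balls and races.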

Relevance: 30.00%

Abstract:

Many randomised controlled trials (RCTs) have been conducted using Piper methysticum (kava); however, no qualitative research exploring the experience of taking kava during a clinical trial has previously been reported. Patients and methods: A qualitative research component (in the form of semi-structured and open-ended written questions) was incorporated into an RCT to explore the experiences of those participating in a clinical trial of kava. The written questions were provided to participants at weeks 2 and 3 (after randomisation, after each controlled phase). The researcher and participants were blinded as to whether they were taking kava or placebo. Two open-ended questions were posed to elicit their experiences of taking either kava or placebo. Thematic analysis was undertaken and researcher triangulation employed to ensure analytical rigour. Results: Key themes after the kava phases were a reduction in anxiety and stress, and calming or relaxing mental effects. Other themes related to improvements in sleep and in somatic anxiety symptoms. Kava use did not cause any serious adverse reactions, although a few respondents reported nausea or other gastrointestinal side effects. This represents the first documented qualitative investigation of the experience of taking kava during a clinical trial. The primary themes involved anxiolytic and calming effects, with only a minor theme reflecting side effects. Our exploratory qualitative data were consistent with the significant quantitative results of the study and provide additional support to suggest that the trial results did not exclude any important positive or negative effects (at least as experienced by the trial participants).

Relevance: 30.00%

Abstract:

Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention. Methods: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any RCT comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0·66 [95% CI 0·54-0·81], p<0·0001) and STS showed a similar trend (RR 0·88 [95% CI 0·76-1·01], p=0·08). Both TM (RR 0·79 [95% CI 0·67-0·94], p=0·008) and STS (RR 0·77 [95% CI 0·68-0·87], p<0·0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge, self-care, and functional class were observed. Conclusion: TM and STS both appear to be effective interventions to improve outcomes in patients with CHF.
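The summary statistic pooled throughout this review is the risk ratio with a 95% confidence interval. The sketch below shows the standard calculation from arm-level event counts; the counts are illustrative, not data from the included trials.

```python
# Small sketch (illustrative counts): risk ratio and 95% CI from two-arm event counts.
import math

def risk_ratio(events_tx, n_tx, events_ctrl, n_ctrl, z=1.96):
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    # standard error of log(RR) from the usual large-sample formula
    se = math.sqrt(1/events_tx - 1/n_tx + 1/events_ctrl - 1/n_ctrl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Example: 60/500 deaths with telemonitoring vs 90/500 with usual care.
print(risk_ratio(60, 500, 90, 500))   # RR ~0.67 with its 95% CI
```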