911 results for Reliability data
Abstract:
This paper describes a framework that is being developed for the prediction and analysis of electronic power module reliability, both for qualification testing and for in-service lifetime prediction. A physics-of-failure (PoF) reliability methodology using multi-physics high-fidelity and reduced-order computer modelling, together with numerical optimization techniques, is integrated in a dedicated computer modelling environment to meet the design and maintenance needs of power module designers and manufacturers as well as end-users. An example of lifetime prediction for a power module solder interconnect structure is described, as is the lifetime prediction of a power module for a railway traction control application. The paper also discusses a combined physics-of-failure and data-trending prognostic methodology for the health monitoring of power modules.
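The abstract names lifetime prediction for a solder interconnect but not the underlying fatigue model. As a hedged illustration of the physics-of-failure approach, the sketch below applies the widely used Coffin-Manson low-cycle fatigue relation; the constant C, exponent n, strain range, and cycling rate are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of a physics-of-failure lifetime estimate for a solder
# interconnect, using the Coffin-Manson low-cycle fatigue relation:
#     N_f = C * (delta_gamma) ** (-n)
# The paper's actual models are not given in the abstract; the constants
# and strain values below are hypothetical placeholders.

def coffin_manson_cycles(delta_gamma: float, C: float = 0.5, n: float = 2.0) -> float:
    """Cycles to failure N_f for a given cyclic shear strain range."""
    return C * delta_gamma ** (-n)

# Example: a thermal cycle inducing 1% cyclic shear strain in the joint.
delta_gamma = 0.01
n_f = coffin_manson_cycles(delta_gamma)
cycles_per_day = 24  # assumed: one thermal cycle per hour of service
print(f"Predicted cycles to failure: {n_f:.0f}")
print(f"Predicted lifetime: {n_f / cycles_per_day:.0f} days")
```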
Abstract:
Objective: To evaluate the psychometric performance of the Child Health Questionnaire (CHQ) in children with cerebral palsy (CP).
Method: 818 parents of children with CP, aged 8–12 years, from nine regions of Europe completed the CHQ (50-item parent form). Functional abilities were classified using the five-level Gross Motor Function Classification System (Levels I–III as ambulant; Levels IV–V as nonambulant CP).
Results: Ceiling effects were observed for a number of subscales and summary scores across all Gross Motor Function Classification System levels, whilst floor effects occurred only in the physical functioning scale (Level V CP). Reliability was satisfactory overall. Confirmatory factor analysis (CFA) revealed a seven-factor structure for the total sample of children with CP but with different factor structures for ambulant and nonambulant children.
Conclusion: The CHQ has limited applicability in children with CP, although, with judicious use of certain domains for ambulant and nonambulant children, it can provide useful and comparable data about child health status for descriptive purposes.
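Ceiling and floor effects like those reported above are usually quantified as the percentage of respondents scoring at the scale maximum or minimum. A minimal sketch, assuming simulated scores on a 0–100 subscale and a common (but not universal) 15% flagging threshold, neither of which comes from the study:

```python
import numpy as np

# Illustrative check for ceiling/floor effects on a 0-100 scale score.
# The simulated data and the 15% threshold are assumptions, not values
# from the CHQ study above.

rng = np.random.default_rng(0)
scores = np.clip(rng.normal(85, 15, size=818), 0, 100)  # simulated subscale

ceiling = np.mean(scores == 100) * 100   # % of respondents at the maximum
floor = np.mean(scores == 0) * 100       # % of respondents at the minimum
print(f"Ceiling: {ceiling:.1f}% at max, Floor: {floor:.1f}% at min")
print("Ceiling effect flagged" if ceiling > 15 else "No ceiling effect flagged")
```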
Abstract:
Electron probe microanalysis is now widely adopted in tephra studies as a technique for determining the major element geochemistry of individual glass shards. Accurate geochemical characterization is crucial for enabling robust tephra-based correlations; such information may also be used to link a tephra to a specific source and often to a particular eruption. In this article, we present major element analyses for rhyolitic natural glass standards analysed on three different microprobes and the new JEOL FEGSEM 6500F microprobe at Queen’s University Belfast. Despite the scatter in some elements, good comparability is demonstrated among data yielded from this new system, the previous Belfast JEOL-733 Superprobe, the JEOL-8200 Superprobe (Copenhagen) and the long-established microprobe facility in Edinburgh. Importantly, our results show that major elements analysed using different microprobes and variable operating conditions allow two high-silica glasses to be discriminated accurately.
Testing the stability of the benefit transfer function for discrete choice contingent valuation data
Abstract:
This paper examines the stability of the benefit transfer function across 42 recreational forests in the British Isles. A working definition of reliable function transfer is put forward, and a suitable statistical test is provided. A novel split-sample method is used to test the sensitivity of the models' log-likelihood values to the removal of contingent valuation (CV) responses collected at individual forest sites. We find that a stable function improves our measure of transfer reliability, but not by much. We conclude that, in empirical studies on transferability, considerations of function stability are secondary to the availability and quality of site attribute data. Modellers can study the advantages of transfer function stability vis-a-vis the value of additional information on recreation site attributes. (c) 2008 Elsevier GmbH. All rights reserved.
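The split-sample idea, refitting the model with one site's CV responses removed and observing how the log-likelihood moves, can be sketched as a leave-one-site-out loop. Everything below (the simulated bids and responses, the plain logit specification, the per-response average log-likelihood used for comparison) is a hypothetical stand-in, not the paper's test:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_sites, n_per_site = 42, 50
site = np.repeat(np.arange(n_sites), n_per_site)
bid = rng.uniform(1, 10, site.size)                       # offered bid amount
p_yes = 1 / (1 + np.exp(-(2 - 0.3 * bid)))                # assumed response model
yes = (rng.uniform(0, 1, site.size) < p_yes).astype(int)  # 1 = willing to pay

def mean_loglik(X, y):
    """Average log-likelihood of a near-unpenalized logit fit."""
    m = LogisticRegression(C=1e6).fit(X, y)
    p = m.predict_proba(X)[:, 1]
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

full = mean_loglik(bid.reshape(-1, 1), yes)
for s in range(3):                                        # first 3 sites shown
    keep = site != s
    ll = mean_loglik(bid[keep].reshape(-1, 1), yes[keep])
    print(f"without site {s}: mean LL {ll:.4f} (pooled {full:.4f})")
```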
Abstract:
BACKGROUND: Inappropriate prescribing is a well-documented problem in older people. The new screening tools, STOPP (Screening Tool of Older People's Prescriptions) and START (Screening Tool to Alert doctors to Right Treatment), have been formulated to identify potentially inappropriate medications (PIMs) and potential errors of omission (PEOs) in older patients. Consistent, reliable application of STOPP and START is essential for the screening tools to be used effectively by pharmacists. OBJECTIVE: To determine the interrater reliability among a group of clinical pharmacists in applying the STOPP and START criteria to elderly patients' records. METHODS: Ten pharmacists (5 hospital pharmacists, 5 community pharmacists) were given 20 patient profiles containing details including the patients' age and sex, current medications, current diagnoses, relevant medical histories, biochemical data, and estimated glomerular filtration rate. Each pharmacist applied the STOPP and START criteria to each patient record. The PIMs and PEOs identified by each pharmacist were compared with those of 2 academic pharmacists who were highly familiar with the application of STOPP and START. An interrater reliability analysis using the κ statistic (a chance-corrected measure of agreement) was performed to determine consistency between pharmacists. RESULTS: The median κ coefficients for hospital pharmacists and community pharmacists compared with the academic pharmacists for STOPP were 0.89 and 0.88, respectively, while those for START were 0.91 and 0.90, respectively. CONCLUSIONS: Interrater reliability of the STOPP and START tools between pharmacists working in different sectors is good. Pharmacists working in both hospitals and the community can use STOPP and START reliably during their everyday practice to identify PIMs and PEOs in older patients.
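The κ statistic used here is Cohen's chance-corrected agreement coefficient. A minimal sketch of the per-rater comparison, using scikit-learn's cohen_kappa_score on invented flag vectors (1 = medication flagged as a PIM, 0 = not flagged) rather than the study's records:

```python
from sklearn.metrics import cohen_kappa_score

# Chance-corrected agreement (Cohen's kappa) between one pharmacist and
# the academic reference. The labels below are illustrative only.

reference  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
pharmacist = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1]

kappa = cohen_kappa_score(reference, pharmacist)
print(f"kappa = {kappa:.2f}")  # kappa above ~0.8 is conventionally "very good"
```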
Abstract:
Data obtained with any research tool must be reproducible, a concept referred to as reliability. Three techniques are often used to evaluate the reliability of tools producing continuous data in aging research: intraclass correlation coefficients (ICC), Pearson correlations, and paired t tests. These are often construed as equivalent when applied to reliability. This is not correct, and it may lead researchers to select instruments based on statistics that do not reflect actual reliability. The purpose of this paper is to compare the reliability estimates produced by these three techniques and to determine which is preferable. A hypothetical dataset was produced to evaluate the reliability estimates obtained with the ICC, Pearson correlations, and paired t tests in three different situations. For each situation, two sets of 20 observations were created to simulate an intrarater or interrater paradigm, based on 20 participants with two observations per participant. The situations were designed to demonstrate good agreement, systematic bias, or substantial random measurement error. In the situation demonstrating good agreement, all three techniques supported the conclusion that the data were reliable. In the situation demonstrating systematic bias, the ICC and t test suggested the data were not reliable, whereas the Pearson correlation suggested high reliability despite the systematic discrepancy. In the situation representing substantial random measurement error, where low reliability was expected, the ICC and the Pearson coefficient accurately reflected this; the t test nevertheless suggested the data were reliable. The ICC is the preferred technique for measuring reliability. Although there are some limitations associated with its use, they can be overcome.
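The systematic-bias case is easy to reproduce. A minimal sketch, with simulated rater scores standing in for the paper's hypothetical dataset, computing ICC(2,1) from the two-way ANOVA mean squares alongside Pearson's r and a paired t test:

```python
import numpy as np
from scipy import stats

# Under systematic bias, Pearson's r stays high while the ICC drops and
# the paired t test flags the offset. ICC(2,1) is computed from the
# two-way ANOVA mean squares. The simulated values are illustrative.

def icc_2_1(x, y):
    data = np.column_stack([x, y])
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.sum((data.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((data.mean(axis=0) - grand) ** 2) / (k - 1)
    ss_err = np.sum((data - data.mean(axis=1, keepdims=True)
                     - data.mean(axis=0) + grand) ** 2)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(42)
rater1 = rng.normal(50, 10, 20)
rater2 = rater1 + 8 + rng.normal(0, 1, 20)   # systematic bias of ~8 units

r, _ = stats.pearsonr(rater1, rater2)
t, p = stats.ttest_rel(rater1, rater2)
print(f"Pearson r = {r:.2f} (misleadingly high)")
print(f"ICC(2,1)  = {icc_2_1(rater1, rater2):.2f} (penalizes the bias)")
print(f"paired t: t = {t:.2f}, p = {p:.2g} (detects the offset)")
```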
Abstract:
In this paper, we investigate the potential improvement in signal reliability for indoor off-body communications when using spatial diversity at the base station. In particular, we utilize two hypothetical indoor base stations operating at 5.8 GHz, each featuring four antennas spaced either half a wavelength or one wavelength apart. Three on-body locations are considered along with four types of user movement. The cross-correlation between the received signal envelopes observed at each base station antenna element was calculated and found to be always less than 0.5. Selection, maximal ratio, and equal gain combining of the received signal have shown that the greatest improvement is obtained when the user is mobile, with a maximum diversity gain of 11.34 dB achievable when using a four-branch receiver. To model the fading envelope obtained at the output of the virtual combiners, we use diversity-specific theoretical probability density functions for multi-branch receivers operating in Nakagami-m fading channels. It is shown that these equations provide an excellent fit to the measured channel data.
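The three combining schemes are straightforward to simulate. A Monte-Carlo sketch over independent Nakagami-m branches (a Nakagami-m envelope is the square root of a gamma variate with shape m and scale Ω/m); the branch count, m value, and 1% outage point are illustrative choices, not the paper's measured channels:

```python
import numpy as np

# Monte-Carlo sketch of diversity combining over independent Nakagami-m
# branches. Parameters below are illustrative assumptions.

rng = np.random.default_rng(7)
m, omega, branches, trials = 1.5, 1.0, 4, 200_000

env = np.sqrt(rng.gamma(shape=m, scale=omega / m, size=(trials, branches)))
snr = env ** 2                                     # branch power gains

single = snr[:, 0]
sc = snr.max(axis=1)                               # selection combining
mrc = snr.sum(axis=1)                              # maximal ratio combining
egc = env.sum(axis=1) ** 2 / branches              # equal gain combining

for name, s in [("single", single), ("SC", sc), ("EGC", egc), ("MRC", mrc)]:
    level = np.percentile(s, 1)                    # 1% outage power level
    gain_db = 10 * np.log10(level / np.percentile(single, 1))
    print(f"{name:6s}: 1%-outage level {level:.3f} ({gain_db:+.1f} dB vs single)")
```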
Abstract:
Future digital signal processing (DSP) systems must provide robustness at the algorithm and application level against the reliability issues that accompany implementations in modern semiconductor process technologies. In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. In particular, we propose a novel framework to characterize the effects of unreliable memories, which enables us to devise novel methods to mitigate the associated performance loss. We propose to deploy specifically designed data representations, which can substantially improve system reliability compared to the conventional data representations used in digital integrated circuits, such as 2's-complement or sign-magnitude number formats. To demonstrate the efficacy of the proposed framework, we analyze the impact of unreliable memories on coded communication systems, and we show that the deployment of optimized data representations substantially improves the error-rate performance of such systems.
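The sensitivity of a number format to memory errors is easy to demonstrate. The sketch below flips one random bit per stored 8-bit word and compares the resulting value error for the two conventional formats the abstract mentions; the paper's optimized representations go further and are not reproduced here:

```python
import numpy as np

# Flip one random bit per stored 8-bit word and compare the mean absolute
# value error for two's-complement vs sign-magnitude storage. This only
# illustrates that the representation matters; it is not the paper's method.

rng = np.random.default_rng(3)
values = rng.integers(-100, 101, size=10_000)

def twos_decode(word):   # 8-bit two's complement
    return word - 256 * (word >> 7)

def sm_decode(word):     # 8-bit sign-magnitude
    return (-1) ** (word >> 7) * (word & 0x7F)

twos_words = values & 0xFF
sm_words = (np.abs(values) & 0x7F) | ((values < 0).astype(int) << 7)

flips = 1 << rng.integers(0, 8, size=values.size)  # one random bit per word
err_twos = np.mean(np.abs(twos_decode(twos_words ^ flips) - values))
err_sm = np.mean(np.abs(sm_decode(sm_words ^ flips) - values))
print(f"mean |error|: two's complement {err_twos:.1f}, sign-magnitude {err_sm:.1f}")
```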
Abstract:
A reliable and valid instrument is needed to screen for depression in palliative patients. The interRAI Depression Rating Scale (DRS) is based on seven items in the interRAI Palliative Care instrument. This study is the first to explore the dimensionality, reliability and validity of the DRS in a palliative population. Palliative home care patients (n = 5,175) residing in Ontario (Canada) were assessed with the interRAI Palliative Care instrument. Exploratory factor analysis and Mokken scale analysis were used to identify candidate conceptual models and evaluate scale homogeneity/performance. Confirmatory factor analysis compared models using standard goodness-of-fit indices. Convergent and divergent validity were investigated by examining polychoric correlations between the DRS and other items. The “known groups” test determined if the DRS meaningfully distinguished among client subgroups. The non-hierarchical two factor model showed acceptable fit with the data, and ordinal alpha coefficients of 0.83 and 0.82 were observed for the two DRS subscales. Omega hierarchical (ωh) was 0.78 for the bifactor model, with the general factor explaining three quarters of the common variance. Despite the multidimensionality evident in the factor analyses, bifactor modelling and the Mokken homogeneity coefficient (0.34) suggest that the DRS is a coherent scale that captures important information on sub-constructs of depression (e.g., somatic symptoms). Higher correlations were seen between the DRS and mood and psychosocial well-being items, and lower correlations with functional status and demographic variables. The DRS distinguished in the expected manner for known risk factors (e.g., social support, pain). The results suggest that the DRS is primarily unidimensional and reliable for use in screening for depression in palliative care patients.
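For readers unfamiliar with the reliability coefficients reported here: ordinal alpha is computed from polychoric rather than Pearson covariances, which the simpler standard Cronbach's alpha sketch below does not reproduce; the 7-item response matrix is simulated, not DRS data:

```python
import numpy as np

# Standard Cronbach's alpha for a k-item scale (a simplification of the
# ordinal alpha reported in the study). Data below are simulated.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(11)
trait = rng.normal(size=(500, 1))                       # shared latent trait
items = np.clip(np.round(trait + rng.normal(0, 0.8, (500, 7))), -3, 3)
print(f"alpha = {cronbach_alpha(items):.2f}")
```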
Abstract:
Due to the variability of wind power, it is imperative to forecast wind generation accurately and in a timely manner to enhance the flexibility and reliability of real-time power system operation and control. Special events such as ramps and spikes are hard to predict with traditional methods that use only recently measured data. In this paper, a new Gaussian Process model with hybrid training data, taken from both the local time window and a historic dataset, is proposed and applied to make short-term predictions from 10 minutes to one hour ahead. A key idea is that similar-pattern data from the historical record are selected and embedded in the Gaussian Process model to make predictions. The results of the proposed algorithms are compared to those of the standard Gaussian Process model and the persistence model. It is shown that the proposed method reduces not only magnitude error but also phase error.
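A minimal sketch of the hybrid-training idea: form lag-vector/next-value training pairs from the recent data, add historical pairs whose lag pattern is closest to the current one, and fit a Gaussian Process on the union. The synthetic series, lag length, nearest-neighbour selection rule, and kernel are all assumptions standing in for the paper's scheme:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
t = np.arange(600, dtype=float)
power = np.sin(t / 30) + 0.1 * rng.normal(size=t.size)   # synthetic wind power

lag = 6
def make_pairs(series, start, stop):
    """Lag-vector inputs and next-value targets for indices start..stop-1."""
    X = np.array([series[i:i + lag] for i in range(start, stop)])
    y = series[start + lag:stop + lag]
    return X, y

X_loc, y_loc = make_pairs(power, 540, 594)     # local (recent) training pairs
current = power[-lag:]                         # the pattern to be continued
X_hist, y_hist = make_pairs(power, 0, 500)     # candidate historical pairs
nearest = np.argsort(np.linalg.norm(X_hist - current, axis=1))[:30]

X = np.vstack([X_loc, X_hist[nearest]])        # hybrid training set
y = np.concatenate([y_loc, y_hist[nearest]])
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(X, y)

mean, std = gp.predict(current.reshape(1, -1), return_std=True)
print(f"next-step forecast: {mean[0]:.2f} +/- {std[0]:.2f}")
```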
Abstract:
This paper presents a framework for a telecommunications interface that allows data from sensors embedded in Smart Grid applications to be reliably archived in an appropriate time-series database. The challenge in doing so is two-fold: first, the various formats in which sensor data are represented; second, the problem of telecoms reliability. A prototype of the authors' framework is detailed, showcasing its main features in a case study featuring Phasor Measurement Units (PMUs) as the application. Useful analysis of PMU data is achieved whenever data from multiple locations can be compared on a common time axis. The prototype highlights the framework's reliability, extensibility and adoptability; features which are largely deferred from industry standards for data representation to proprietary database solutions. The open source framework presented provides link reliability for any type of Smart Grid sensor and is interoperable with both existing proprietary and open database systems. These features allow researchers and developers to focus on the core of their real-time or historical analysis applications, rather than spending time interfacing with complex protocols.
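The common-time-axis step the abstract mentions amounts to aligning streams sampled at different instants before comparing them. A small pandas sketch, with invented column names, timestamps, and a 20 ms matching tolerance (none of which come from the paper or any PMU standard):

```python
import pandas as pd

# Align two PMU streams on shared timestamps before comparison.
# All values below are illustrative.

pmu_a = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 00:00:00.000", "2024-01-01 00:00:00.020",
                            "2024-01-01 00:00:00.040"]),
    "freq_hz": [50.01, 50.02, 50.00]})
pmu_b = pd.DataFrame({
    "time": pd.to_datetime(["2024-01-01 00:00:00.005", "2024-01-01 00:00:00.025",
                            "2024-01-01 00:00:00.045"]),
    "freq_hz": [49.99, 50.00, 50.01]})

merged = pd.merge_asof(pmu_a, pmu_b, on="time", suffixes=("_a", "_b"),
                       tolerance=pd.Timedelta("20ms"), direction="nearest")
merged["delta_hz"] = merged["freq_hz_a"] - merged["freq_hz_b"]
print(merged)
```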
Abstract:
This paper discusses a proposed new communications framework for phasor measurement units (PMUs), optimized for use on wide area networks. Traditional PMU telecoms have been optimized for environments where bandwidth is restricted. The new method takes the reliability of the telecommunications medium into account and provides guaranteed delivery of data whilst optimizing for real-time delivery of the most current data. Other important aspects, such as security, are also considered.
Abstract:
This paper proposes a probabilistic principal component analysis (PCA) approach to islanding detection based on wide area PMU data. According to many power system operators, the increasing probability of uncontrolled islanding operation is one of the biggest concerns with a large penetration of distributed renewable generation. Traditional islanding detection methods, such as RoCoF and vector shift, are however extremely sensitive and may result in many unwanted trips. The proposed probabilistic PCA aims to improve islanding detection accuracy and reduce the risk of unwanted tripping based on PMU measurements, while addressing the practical issue of missing data. The reliability and accuracy of the proposed approach are demonstrated using real data recorded in the UK power system by the OpenPMU project. The results show that the proposed methods can detect islanding accurately, without being falsely triggered by generation trips, even in the presence of missing values.
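A deterministic-PCA sketch of the detection step: fit PCA on normal-operation frequency measurements from several sites, then flag samples whose reconstruction error (Q statistic) exceeds a threshold. The probabilistic PCA in the paper additionally handles missing measurements through its latent-variable model; that part, and all data below, are simplified stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA

# Fit PCA on normal operation, flag samples with a large reconstruction
# error (Q statistic). Data and thresholds are illustrative only.

rng = np.random.default_rng(9)
n_sites = 6
normal = (50 + 0.01 * rng.normal(size=(1000, 1))          # common-mode swings
          + 0.002 * rng.normal(size=(1000, n_sites)))     # per-site noise

pca = PCA(n_components=2).fit(normal)
recon = pca.inverse_transform(pca.transform(normal))
q_normal = np.sum((normal - recon) ** 2, axis=1)
threshold = np.percentile(q_normal, 99.9)

# islanding: one site's frequency drifts away from the interconnection
event = normal[-1].copy()
event[3] += 0.5                     # 0.5 Hz deviation at the islanded site
q_event = np.sum((event - pca.inverse_transform(pca.transform(event[None]))) ** 2)
print(f"threshold {threshold:.2e}, event Q {q_event:.2e} -> "
      f"{'islanding flagged' if q_event > threshold else 'normal'}")
```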
Abstract:
The contemporary literature investigating the construct broadly known as time perspective is replete with methodological and conceptual concerns. These concerns focus on the reliability and factorial validity of measurement tools, and the sample-specific modification of scales. These issues continue to hamper the development of this potentially useful psychological construct. An emerging body of evidence has supported the six-factor structure of scores on the Adolescent Time Inventory-Time Attitudes Scale, as well as their reliability. The present study utilized data from the first wave of a longitudinal study in the United Kingdom to examine the reliability, validity, and cross-cultural invariance of the scale. Results showed that the hypothesized six-factor model provided the best fit for the data; all alpha and omega estimates were > .70; scores on ATI-TA factors related meaningfully to self-efficacy scores; and the factor structure was invariant across both research sites. Results are discussed in the context of the extant temporal literature.
Abstract:
Acoustic predictions of the recently developed TRACEO ray model, which accounts for bottom shear properties, are benchmarked against tank experimental data from the EPEE-1 and EPEE-2 (Elastic Parabolic Equation Experiment) experiments. Both experiments are representative of signal propagation in a Pekeris-like shallow-water waveguide over a non-flat isotropic elastic bottom, where significant interaction of the signal with the bottom can be expected. The benchmarks show, in particular, that the ray model can be as accurate as a parabolic approximation model benchmarked under similar conditions. The benchmarking results are important, on the one hand, as a preliminary experimental validation of the model and, on the other, as a demonstration of the reliability of the ray approach for seismo-acoustic applications. (C) 2012 Acoustical Society of America. [http://dx.doi.org/10.1121/1.4734236]