931 results for Reliability assessment
Abstract:
In some circumstances ice floes may be modeled as beams. This modeling generally assumes constant thickness, which contradicts field observations: the action of currents, wind, and the sequence of contacts causes thickness to vary. Here this effect is taken into account in modeling the behavior of ice hitting the inclined walls of offshore platforms. For this purpose, the boundary value problem is first formulated. The set of equations so obtained is then transformed into a system of equations that is solved numerically. To this end, an implicit solution is developed using a shooting method with the accompanying Jacobian. In-plane coupling and the dependency of the boundary terms on deformation make the problem non-linear and the development particular. Deformation and internal resultants are then computed for harmonic forms of the beam profile. Ways of giving the problem some additional generality are discussed.
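The shooting scheme described here can be illustrated compactly. Below is a minimal sketch, assuming a simplified linear Euler-Bernoulli beam with a harmonic thickness profile and simply supported ends; the material constants, load q, and boundary conditions are illustrative choices, not the paper's formulation (which includes in-plane coupling and deformation-dependent boundary terms).

    import numpy as np
    from scipy.integrate import solve_ivp

    E, L, q = 9.0e9, 10.0, 1.0e3                          # ice Young's modulus [Pa], length [m], load [N/m]
    h = lambda x: 1.0 + 0.2 * np.sin(2 * np.pi * x / L)   # harmonic thickness profile [m]
    EI = lambda x: E * h(x) ** 3 / 12.0                   # bending stiffness per unit width

    def rhs(x, y):
        # State y = [w, w', M, M']; Euler-Bernoulli: w'' = M/EI, M'' = q
        w, wp, M, Mp = y
        return [wp, M / EI(x), Mp, q]

    def residual(s):
        # Shooting unknowns s = [w'(0), M'(0)]; simply supported: w(0) = M(0) = 0
        sol = solve_ivp(rhs, (0.0, L), [0.0, s[0], 0.0, s[1]], rtol=1e-8)
        return np.array([sol.y[0, -1], sol.y[2, -1]])     # enforce w(L) = 0, M(L) = 0

    def shoot(s0=np.zeros(2), tol=1e-10):
        s = s0.copy()
        for _ in range(20):                               # Newton iteration, finite-difference Jacobian
            r = residual(s)
            if np.linalg.norm(r) < tol:
                break
            J = np.empty((2, 2))
            for j in range(2):
                ds = np.zeros(2); ds[j] = 1e-6
                J[:, j] = (residual(s + ds) - r) / 1e-6
            s -= np.linalg.solve(J, r)
        return s

    print(shoot())   # converged initial slopes w'(0) and M'(0)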
Abstract:
BACKGROUND CONTEXT: The vertebral spine angle in the frontal plane is an important parameter in the assessment of scoliosis and may be obtained from panoramic X-ray images. Technological advances have allowed for increased use of digital X-ray images in clinical practice. PURPOSE: In this context, the objective of this study is to assess the reliability of computer-assisted Cobb angle measurements taken from digital X-ray images. STUDY DESIGN/SETTING: Clinical investigation quantifying scoliotic deformity with the Cobb method to evaluate intra- and interobserver variability using manual and digital techniques. PATIENT SAMPLE: Forty-nine patients diagnosed with idiopathic scoliosis were chosen based on convenience, without predilection for gender, age, or the type, location, or magnitude of the curvature. OUTCOME MEASURES: Images were examined to evaluate Cobb angle variability, end plate selection, and intra- and interobserver errors. METHODS: Specific software was developed to digitally reproduce the Cobb method and semiautomatically calculate the degree of scoliotic deformity. During the study, three observers estimated the Cobb angle using both the digital and the traditional manual methods. RESULTS: The results showed that Cobb angle measurements may be reproduced in the computer as reliably as with the traditional manual method, under conditions similar to those found in clinical practice. CONCLUSIONS: The computer-assisted (digital) method is clinically advantageous and appropriate for assessing scoliotic curvature in the frontal plane using the Cobb method. (C) 2010 Elsevier Inc. All rights reserved.
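Since the digital method reproduces the Cobb construction, the core computation reduces to the angle between the two digitized endplate lines. A minimal sketch, assuming each endplate is supplied as a pair of pixel-coordinate endpoints (the coordinates below are invented):

    import numpy as np

    def cobb_angle(p1, p2, q1, q2):
        # Cobb angle in degrees between the superior endplate line p1->p2
        # and the inferior endplate line q1->q2, each an (x, y) pixel pair
        u = np.asarray(p2, float) - np.asarray(p1, float)
        v = np.asarray(q2, float) - np.asarray(q1, float)
        c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    # endplate endpoints digitized on the X-ray image:
    print(cobb_angle((100, 200), (180, 190), (95, 420), (175, 445)))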
Abstract:
This paper proposes a new methodology to reduce the probability of occurrence of states that cause load curtailment, while minimizing the costs involved in achieving that reduction. The methodology is supported by a hybrid method based on fuzzy sets and Monte Carlo simulation to capture both the randomness and the fuzziness of the component outage parameters of a transmission power system. The novelty of this research work consists in proposing two fundamental approaches: 1) a global steady approach that builds the model of a faulted transmission power system aiming at minimizing the unavailability corresponding to each faulted component; this results in the minimal global investment cost for the faulted components in a sample of system states of the transmission network; 2) a dynamic iterative approach that individually checks the effect of each investment on the transmission network. A case study using the IEEE 24-bus Reliability Test System (RTS) 1996 is presented to illustrate in detail the application of the proposed methodology.
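The hybrid of fuzziness and randomness can be pictured as sampling a crisp outage probability from each component's fuzzy membership before drawing its up/down state. The components, triangular memberships, and sampling rule below are illustrative assumptions, not the paper's model:

    import random

    # forced-outage rates as triangular fuzzy numbers (low, mode, high) -- invented values
    fuzzy_for = {"line1": (0.01, 0.02, 0.04),
                 "line2": (0.02, 0.03, 0.05),
                 "gen1":  (0.05, 0.08, 0.12)}

    def sample_state(rng):
        # draw a crisp FOR from each fuzzy number, then a Bernoulli up/down state
        state = {}
        for comp, (lo, mode, hi) in fuzzy_for.items():
            p_out = rng.triangular(lo, hi, mode)
            state[comp] = "down" if rng.random() < p_out else "up"
        return state

    rng = random.Random(42)
    samples = [sample_state(rng) for _ in range(100_000)]
    p_down = sum(s["line1"] == "down" for s in samples) / len(samples)
    print(f"estimated line1 outage probability: {p_down:.4f}")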
Abstract:
The service quality of any sector has two major aspects, namely technical and functional. Technical quality can be attained by maintaining the technical specifications decided by the organization. Functional quality refers to the manner in which service is delivered to the customer, which can be assessed through customer feedback. A field survey was conducted based on the SERVQUAL management tool, with 28 constructs designed under 7 dimensions of service quality. Stratified sampling techniques were used to obtain 336 valid responses, and the gap scores between expectations and perceptions were analyzed using statistical techniques to identify the weakest dimension. To assess the technical aspect of availability, six months of live outage data from base transceiver stations were collected. Statistical and exploratory techniques were used to model the network performance. The failure patterns were modeled as competing-risk models, and the probability distributions of service outage and restoration were parameterized. Since the availability of a network is a function of the reliability and maintainability of its elements, any service provider who wishes to keep up their service level agreements on availability should be aware of the variability of these elements and of its effects on their interactions. The availability variations were studied by designing a discrete-event simulation model with probabilistic input parameters. The probability distribution parameters derived from the live data analysis were used to design experiments defining the availability domain of the network under consideration. The availability domain can be used as a reference for planning and implementing maintenance activities. A new metric is proposed that incorporates a consistency index along with key service parameters and can be used to compare the performance of different service providers. The developed tool can be used for reliability analysis of mobile communication systems and assumes greater significance in the wake of mobile number portability. It also allows a relative measure of the effectiveness of different service providers.
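Because availability is a function of element reliability and maintainability, the variability study can be pictured as an alternating-renewal simulation. The distributions and parameters below are assumptions for illustration; the paper parameterizes its outage and restoration distributions from the live data:

    import random

    def simulate_availability(mtbf, mttr, horizon_h, rng):
        # alternate exponential times-to-failure and restoration times over the horizon
        t, up_time = 0.0, 0.0
        while t < horizon_h:
            ttf = rng.expovariate(1.0 / mtbf)
            up_time += min(ttf, horizon_h - t)
            t += ttf
            if t >= horizon_h:
                break
            t += rng.expovariate(1.0 / mttr)        # restoration (downtime)
        return up_time / horizon_h

    rng = random.Random(7)
    horizon = 26 * 7 * 24                           # six months, in hours
    runs = sorted(simulate_availability(500.0, 4.0, horizon, rng) for _ in range(2000))
    print(f"mean availability: {sum(runs) / len(runs):.5f}")
    print(f"5th-95th percentile: {runs[100]:.5f} - {runs[1900]:.5f}")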
Abstract:
This paper explains why the reliability assessment of energy-limited systems requires more detailed models of primary generating resource availability, internal and external generation dispatch, and customer demand than those commonly used for large power systems. It presents a methodology for their long-term reliability assessment, based on full sequential Monte Carlo simulation with AC power flow, which can properly include these detailed models. By means of a real example, it is shown that the simplified modeling traditionally used for large power systems leads to pessimistic predictions when applied to an energy-limited system, and that it cannot predict all the load-point adequacy problems. © 2006 IEEE.
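Why energy-limited resources need chronological treatment can be seen in a toy sequential Monte Carlo loop in which a hydro unit draws on a finite reservoir. This sketch omits the paper's AC power flow and uses invented capacities, inflows, and load shape:

    import random

    def sequential_mc_year(load, cap_thermal, reservoir_mwh, inflow_mwh_h, rng, gen_for=0.05):
        # one chronological year: thermal unit with random outages plus an
        # energy-limited hydro unit drawing on a reservoir
        stored, lole_h, eens = reservoir_mwh, 0, 0.0
        for demand in load:
            stored = min(reservoir_mwh, stored + inflow_mwh_h)
            thermal = 0.0 if rng.random() < gen_for else cap_thermal
            hydro = min(demand - min(demand, thermal), stored)  # cover the remaining gap
            stored -= hydro
            served = min(demand, thermal) + hydro
            if served < demand - 1e-9:
                lole_h += 1                                     # hour with curtailment
                eens += demand - served
        return lole_h, eens

    rng = random.Random(1)
    load = [80 + 40 * (8 <= h % 24 <= 20) for h in range(8760)]  # crude daily shape [MW]
    years = [sequential_mc_year(load, 100.0, 500.0, 5.0, rng) for _ in range(200)]
    print("LOLE (h/yr):", sum(y[0] for y in years) / len(years))
    print("EENS (MWh/yr):", sum(y[1] for y in years) / len(years))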
Abstract:
BACKGROUND: High intercoder reliability (ICR) is required in qualitative content analysis to assure quality when more than one coder is involved in data analysis. The literature offers few standardized procedures for ICR assessment in qualitative content analysis. OBJECTIVE: To illustrate how ICR assessment can be used to improve coding in qualitative content analysis. METHODS: Key steps of the procedure are presented, drawing on data from a qualitative study on patients' perspectives on low back pain. RESULTS: First, a coding scheme was developed using a comprehensive inductive and deductive approach. Second, 10 transcripts were coded independently by two researchers, and ICR was calculated. The resulting kappa value of .67 can be regarded as satisfactory to solid. Moreover, varying agreement rates helped to identify problems in the coding scheme: low agreement rates, for instance, indicated that the respective codes were defined too broadly and needed clarification. In a third step, the results of the analysis were used to improve the coding scheme, leading to consistent and high-quality results. DISCUSSION: The quantitative approach of ICR assessment is a viable instrument for quality assurance in qualitative content analysis. Kappa values and close inspection of agreement rates help to estimate and increase the quality of coding. This approach facilitates good practice in coding and enhances the credibility of the analysis, especially when large samples are interviewed, different coders are involved, and quantitative results are presented.
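The statistic behind a value such as the reported .67 is standard Cohen's kappa over a coder-by-coder confusion matrix. A worked sketch with invented counts, chosen to land near that value:

    def cohens_kappa(m):
        # m[i][j]: text segments coder A assigned code i and coder B assigned code j
        n = sum(sum(row) for row in m)
        k = len(m)
        po = sum(m[i][i] for i in range(k)) / n                   # observed agreement
        row = [sum(m[i]) for i in range(k)]
        col = [sum(m[i][j] for i in range(k)) for j in range(k)]
        pe = sum(row[i] * col[i] for i in range(k)) / n ** 2      # chance agreement
        return (po - pe) / (1 - pe)

    m = [[41, 4, 3],
         [4, 30, 4],
         [2, 3, 9]]
    print(round(cohens_kappa(m), 2))   # -> 0.67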
Abstract:
Objective: To evaluate the reliability of a peer evaluation instrument in a longitudinal team-based learning setting. Methods: Student pharmacists were instructed to evaluate the contributions of their peers. Evaluations were analyzed for the variance of the scores by identifying low, medium, and high scores. Agreement between performance ratings within each group of students was assessed via the intra-class correlation coefficient (ICC). Results: We found little variation in the standard deviation (SD) of the score means among the high, medium, and low scores within each group. The lack of variation in SD between groups suggests that the peer evaluation instrument produces precise results. The ICC showed strong concordance among raters. Conclusions: Findings suggest that our student peer evaluation instrument provides a reliable method for peer assessment in team-based learning settings.
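The concordance statistic used here, the ICC, follows from a ratee-by-rater score matrix. A minimal sketch using the one-way random-effects form ICC(1); the ratings are invented, and the abstract does not state which ICC form the study used:

    import numpy as np

    def icc_oneway(scores):
        # one-way random-effects ICC(1): rows = ratees, columns = raters
        x = np.asarray(scores, float)
        n, k = x.shape
        grand = x.mean()
        msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between-ratee MS
        msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-ratee MS
        return (msb - msw) / (msb + (k - 1) * msw)

    ratings = [[8, 9, 8, 9],    # each row: one student rated by four teammates
               [5, 6, 5, 5],
               [9, 9, 10, 9],
               [6, 6, 7, 6]]
    print(round(icc_oneway(ratings), 2))   # close to 1 -> strong concordance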
Abstract:
There is increasing concern to reduce cost and overhead during the development of reliable systems. Selective protection of the most critical parts of a system represents a viable solution to obtain a high level of reliability at a fraction of the cost. In particular, to design a selective fault mitigation strategy for processor-based systems, it is mandatory to identify and prioritize the most vulnerable registers in the register file as the best candidates to be protected (hardened). This paper presents an application-based metric to estimate the criticality of each register in the register file of microprocessor-based systems. The proposed metric relies on the combination of three criteria based on common features of the executed applications. The applicability and accuracy of our proposal have been evaluated on a set of applications running on different microprocessors. Results show a significant improvement in accuracy compared to previous approaches, regardless of the underlying architecture.
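A register-criticality metric of this kind can be sketched as a weighted combination of normalized per-register features extracted from an application profile. The three criteria named below (live-interval length, read count, dependency fan-out) and the weights are illustrative stand-ins; the abstract does not state the paper's criteria:

    def criticality(registers, weights=(0.4, 0.3, 0.3)):
        # rank registers by a weighted mix of min-max normalized criteria
        def norm(vals):
            lo, hi = min(vals), max(vals)
            return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]
        names = list(registers)
        criteria = list(zip(*(registers[r] for r in names)))   # one tuple per criterion
        normed = zip(*map(norm, criteria))                     # back to one tuple per register
        return sorted(((sum(w * c for w, c in zip(weights, crit)), name)
                       for name, crit in zip(names, normed)), reverse=True)

    # (live-interval cycles, reads, dependent instructions) per register -- toy profile
    profile = {"r1": (1200, 340, 25), "r2": (300, 80, 4), "r3": (900, 500, 18)}
    for score, reg in criticality(profile):
        print(f"{reg}: {score:.2f}")   # harden the highest-scoring registers first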
Abstract:
To assess the construct validity and reliability of the Pediatric Patient Classification Instrument. A correlation study developed at a teaching hospital. The classification involved 227 patients, using the Pediatric Patient Classification Instrument. Construct validity was assessed through factor analysis and reliability through internal consistency. The Exploratory Factor Analysis identified three constructs explaining 67.5% of the variance and, in the reliability assessment, the following Cronbach's alpha coefficients were found: 0.92 for the instrument as a whole; 0.88 for the Patient domain; 0.81 for the Family domain; and 0.44 for the Therapeutic procedures domain. The instrument evidenced its construct validity and reliability, and these analyses indicate its feasibility. The validation of the Pediatric Patient Classification Instrument still represents a challenge, given its relevance for a closer look at pediatric nursing care and management. Further research should be considered to explore its dimensionality and content validity.
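The reliability figures quoted are Cronbach's alpha values, which follow directly from an item-score matrix. A minimal sketch with invented patient scores:

    import numpy as np

    def cronbach_alpha(items):
        # rows = classified patients, columns = instrument items
        x = np.asarray(items, float)
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()      # sum of per-item variances
        total_var = x.sum(axis=1).var(ddof=1)       # variance of patient totals
        return k / (k - 1) * (1 - item_var / total_var)

    scores = [[3, 4, 3, 4],
              [2, 2, 3, 2],
              [4, 4, 4, 5],
              [1, 2, 1, 1],
              [3, 3, 4, 3]]
    print(round(cronbach_alpha(scores), 2))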
Abstract:
INTRODUCTION: Debates about the quality of medical education have become more evident in the recent past, and as a result several assessment methods have been refined for that purpose. The use of questionnaires filled out by medical students to assess the quality of lectures is one of the most common methods employed in our milieu. However, the reliability of this investigation method has not yet been systematically tested. The authors present the reliability of a specific form applied to fourth-year medical students during the clinical psychiatry course. METHOD: Eighty-one fourth-year medical students were instructed to complete a form immediately after each clinical psychiatry lecture. Thirty-four students (42%) failed to turn in the forms after the final lecture. These students were given an identical form to assess the lectures retrospectively. The grades given by both groups of students for each lecture actually delivered, and the number of students who graded a lecture that had not been delivered, were compared. Statistical significance was determined by means of the chi-square test (p < 0.05). RESULTS: Eighteen of the 34 students who filled out the forms retrospectively (53%) rated the undelivered lecture, whereas only 5 of the 47 students who filled out the forms during the course (11%) did so, a statistically significant difference (p < 0.05). There was no statistical difference in the grades given to the lectures that were actually delivered. DISCUSSION: The authors concluded that the low reliability of retrospective evaluation warrants a continuous assessment method during the course.
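The reported comparison (18 of 34 retrospective vs. 5 of 47 concurrent respondents rating the undelivered lecture) is a standard chi-square test of independence on a 2x2 table, sketched here with scipy:

    from scipy.stats import chi2_contingency

    # rows: retrospective vs. concurrent; columns: rated / did not rate the undelivered lecture
    table = [[18, 34 - 18],
             [5, 47 - 5]]
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # p < 0.05, consistent with the reported result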