761 results for reliability algorithms
Abstract:
The objective of the present study was to determine the reliability of the Brazilian version of the Composite International Diagnostic Interview 2.1 (CIDI 2.1) in clinical psychiatry. The CIDI 2.1 was translated into Portuguese following WHO guidelines, and reliability was studied using the inter-rater reliability method. The study sample consisted of 186 subjects from psychiatric hospitals and clinics, primary care centers and community services. The interviewers were a group of 13 lay and three non-lay interviewers who underwent CIDI training. The average interview time was 2 h and 30 min. Overall reliability ranged from kappa 0.50 to 1. For lifetime diagnoses, reliability ranged from kappa 0.77 (Bipolar Affective Disorder) to 1 (Substance-Related Disorder, Alcohol-Related Disorder, Eating Disorders). Previous-year reliability ranged from kappa 0.66 (Obsessive-Compulsive Disorder) to 1 (Dissociative Disorders, Manic Disorders, Eating Disorders). The poorest reliability was found for Mild Depressive Episode (kappa = 0.50) during the previous year. Training proved to be a fundamental factor for maintaining good reliability, and technical knowledge of the questionnaire compensated for the lay interviewers' lack of psychiatric training. Inter-rater reliability was good to excellent for persons in psychiatric practice.
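The kappa values above quantify chance-corrected agreement between two interviewers assigning the same diagnoses. As a minimal illustration only (not code or data from the study), the sketch below computes Cohen's kappa for two raters over hypothetical diagnosis labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical diagnoses."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters' marginal distributions were independent
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical lifetime diagnoses from an interview and a re-interview
rater_1 = ["bipolar", "none", "alcohol", "bipolar", "none", "eating"]
rater_2 = ["bipolar", "none", "alcohol", "none",    "none", "eating"]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```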
Abstract:
The objective of the present study was to translate, adapt and validate a Brazilian Portuguese version of the Disabilities of the Arm, Shoulder and Hand (DASH) Questionnaire. The study was carried out in two steps: the first was to translate the DASH into Portuguese and perform the cultural adaptation, and the second was to determine the reliability and validity of the DASH for the Brazilian population. For this purpose, 65 rheumatoid arthritis patients of either sex (diagnosed according to the classification criteria of the American College of Rheumatology), ranging in age from 18 to 60 years and presenting no other diseases involving the upper limbs, were interviewed. The patients were selected consecutively at the rheumatology outpatient clinic of UNIFESP. The following results were obtained: in the first step (translation and cultural adaptation), all patients answered the questions. In the second step, Spearman's correlation coefficients for the interobserver evaluation ranged from 0.762 to 0.995, values considered highly reliable. In addition, intraclass correlation coefficients ranged from 0.97 to 0.99, also highly reliable values. Spearman's correlation coefficients and intraclass correlation coefficients obtained in the intra-observer evaluation ranged from 0.731 to 0.937 and from 0.90 to 0.96, respectively, again indicating high reliability. The Ritchie Index showed a weak correlation with Brazilian DASH scores, while the visual analog scale of pain showed a good correlation with the DASH score. We conclude that the Portuguese version of the DASH is a reliable instrument.
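The interobserver figures above combine a rank correlation (Spearman) with an agreement index (ICC). For illustration only, the sketch below computes both for a hypothetical two-observer data set; the scores are invented, and ICC(2,1) is used as a common choice rather than the exact ICC form used in the study:

```python
import numpy as np
from scipy.stats import spearmanr

def icc_2_1(scores):
    """Two-way random-effects, single-measure ICC(2,1) after Shrout & Fleiss."""
    y = np.asarray(scores, dtype=float)     # rows = patients, columns = observers
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between patients
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between observers
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical DASH-like scores (0-100) from two observers for five patients
scores = np.array([[30.0, 32.0], [55.0, 54.0], [12.0, 15.0], [70.0, 68.0], [41.0, 43.0]])
rho, _ = spearmanr(scores[:, 0], scores[:, 1])
print(f"Spearman rho = {rho:.3f}, ICC(2,1) = {icc_2_1(scores):.3f}")
```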
Abstract:
The reliability and validity of a Portuguese version of the Young Mania Rating Scale were evaluated. The original scale was translated into Portuguese and adapted by the authors. Definitions of clinical manifestations, a semi-structured anchored interview and more explicit rating criteria were added to the scale. Fifty-five adult subjects, aged 18 to 60 years, with a diagnosis of Current Manic Episode according to DSM-III-R criteria were assessed with the Young Mania Rating Scale and the Brief Psychiatric Rating Scale in two sessions held 7 to 10 days apart. Good reliability ratings were obtained, with an intraclass correlation coefficient of 0.97 for total scores and levels of agreement above 0.80 (P < 0.001) for all individual items. Internal consistency analysis yielded an alpha of 0.67 for the scale as a whole and an alpha of 0.72 for each standardized item (P < 0.001). For concurrent validity, a Pearson correlation of 0.78 was obtained between the total scores of the Young Mania Rating Scale and the Brief Psychiatric Rating Scale. The results are similar to those reported for the English version, indicating that the Portuguese version of the scale constitutes a reliable and valid instrument for the assessment of manic patients.
Abstract:
In a cross-sectional study conducted four years earlier to assess the validity of the Brazilian version of the Eating Attitudes Test-26 (EAT-26) for the identification of abnormal eating behaviors in a population of young females in Southern Brazil, 56 women presented abnormal eating behavior as indicated by the EAT-26 and the Edinburgh Bulimic Investigation Test. Each was matched for age and neighborhood to two normal controls (N = 112) and re-assessed four years later with the two screening questionnaires plus the Composite International Diagnostic Interview (CIDI). The EAT results were then compared to the diagnoses obtained with the CIDI. To evaluate the temporal stability of the two screening questionnaires, a test-retest design was applied to estimate kappa coefficients for individual items. The CIDI psychiatric interview was applied to 161 women, giving a prevalence of eating disorders of 6.2%: 0.6% exhibited anorexia nervosa and 5.6% bulimia nervosa (10 positive cases). The validity coefficients of the EAT were 40% sensitivity, 84% specificity, and 14% positive predictive value. Cronbach's coefficient was 0.75. For each EAT item, the kappa index was not higher than 0.344 and the correlation coefficient was lower than 0.488. We conclude that the EAT-26 exhibited low sensitivity and positive predictive value and showed poor temporal stability. It is reasonable to assume that these results were not influenced by the low prevalence of eating disorders in the community. Thus, the results cast doubt on the ability of the EAT-26 to identify cases of abnormal eating behaviors in this population.
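Sensitivity, specificity and positive predictive value are simple proportions from a 2×2 cross-tabulation of screening result against CIDI diagnosis. The sketch below shows the arithmetic on illustrative counts chosen to roughly match the reported 40% / 84% / 14% figures; they are not the study's actual cell counts:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2 table."""
    sensitivity = tp / (tp + fn)          # detected cases among true cases
    specificity = tn / (tn + fp)          # correct negatives among non-cases
    ppv = tp / (tp + fp)                  # true cases among screen positives
    return sensitivity, specificity, ppv

# Illustrative counts only, roughly consistent with 40% / 84% / 14%
sens, spec, ppv = screening_metrics(tp=4, fp=24, fn=6, tn=127)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}, PPV={ppv:.0%}")
```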
Abstract:
We compared the cost-benefit of two algorithms, recently proposed by the Centers for Disease Control and Prevention, USA, with the conventional one, the most appropriate for the diagnosis of hepatitis C virus (HCV) infection in the Brazilian population. Serum samples were obtained from 517 ELISA-positive or -inconclusive blood donors who had returned to Fundação Pró-Sangue/Hemocentro de São Paulo to confirm previous results. Algorithm A was based on the signal-to-cut-off (s/co) ratio of anti-HCV ELISA samples, using the s/co ratio that shows ≥95% concordance with immunoblot (IB) positivity. For algorithm B, reflex nucleic acid amplification testing by PCR was required for ELISA-positive or -inconclusive samples, and IB for PCR-negative samples. For algorithm C, all positive or inconclusive ELISA samples were submitted to IB. We observed a similar rate of positive results with the three algorithms: 287, 287, and 285 for A, B, and C, respectively, of which 283 were concordant across the three. Indeterminate results from algorithms A and C were resolved by PCR (expanded algorithm), which detected two more positive samples. The estimated cost of algorithms A and B was US$21,299.39 and US$32,397.40, respectively, i.e., 43.5 and 14.0% more economical than C (US$37,673.79). The cost can vary according to the technique used. We conclude that both algorithms A and B are suitable for diagnosing HCV infection in the Brazilian population. Furthermore, algorithm A is the more practical and economical one, since it requires supplemental tests for only 54% of the samples, while algorithm B provides early information about the presence of viremia.
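For illustration, the decision flow of the three algorithms can be sketched as below; the Sample fields, the s/co cutoff value and the result labels are placeholders, not values or code from the study:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    # Pre-recorded test results for one ELISA-reactive donor sample (illustrative fields)
    elisa_sco: float        # ELISA signal-to-cut-off ratio
    ib_positive: bool       # immunoblot result
    pcr_detected: bool      # HCV RNA detected by PCR

HIGH_SCO = 10.0             # placeholder for the s/co value giving >=95% IB concordance

def algorithm_a(s: Sample) -> str:
    """A: high s/co ratios are reported positive directly; lower ratios go to immunoblot."""
    if s.elisa_sco >= HIGH_SCO:
        return "positive"
    return "positive" if s.ib_positive else "negative/indeterminate"

def algorithm_b(s: Sample) -> str:
    """B: reflex PCR first; immunoblot only for PCR-negative samples."""
    if s.pcr_detected:
        return "positive (viremic)"
    return "positive" if s.ib_positive else "negative/indeterminate"

def algorithm_c(s: Sample) -> str:
    """C (conventional): immunoblot for every ELISA-reactive or inconclusive sample."""
    return "positive" if s.ib_positive else "negative/indeterminate"

donor = Sample(elisa_sco=14.2, ib_positive=True, pcr_detected=True)
print(algorithm_a(donor), "|", algorithm_b(donor), "|", algorithm_c(donor))
```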
Abstract:
This study compared the effectiveness of multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal in producing reliable responses (signal-to-noise ratio >1.359). Participants were 14 healthy subjects. Visual stimulation was delivered with a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes, whereas at the periphery pattern reversal yielded a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli in generating reliable waveforms in R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the rate of reliability for central sectors by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4, the reliability in generating mfVEPs was above 70% when both protocols were used. Thus, for very high reliability and a thorough examination of visual performance, the use of both stimulation protocols is recommended.
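The reliability criterion above is a fixed signal-to-noise cutoff applied per sector, and the joint analysis counts a sector as reliable if either protocol exceeds it. A minimal sketch with made-up SNR values (not measured data):

```python
import numpy as np

SNR_CUTOFF = 1.359   # reliability criterion used in the study

def reliable_sectors(snr_reversal, snr_pulse, cutoff=SNR_CUTOFF):
    """Per-sector reliability for each protocol and for their joint analysis."""
    reversal = np.asarray(snr_reversal) > cutoff
    pulse = np.asarray(snr_pulse) > cutoff
    joint = reversal | pulse          # a sector counts if either protocol yields SNR > cutoff
    return int(reversal.sum()), int(pulse.sum()), int(joint.sum())

# Illustrative SNR values for six sectors of one ring
rev = [2.1, 1.2, 1.8, 0.9, 1.5, 2.4]
pul = [1.7, 1.6, 1.1, 1.4, 0.8, 2.0]
print(reliable_sectors(rev, pul))     # -> (4, 4, 6): the joint analysis recovers more sectors
```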
Abstract:
Studies on the assessment of the heart rate variability threshold (HRVT) during walking are scarce. We determined the reliability and validity of HRVT assessment during the incremental shuttle walk test (ISWT) in healthy subjects. Thirty-one participants aged 57 ± 9 years (17 females) performed 3 ISWTs. During the 1st and 2nd ISWTs, instantaneous heart rate variability was calculated every 30 s and HRVT was measured. Walking velocity at HRVT in these tests (WV-HRVT1 and WV-HRVT2) was registered. During the 3rd ISWT, physiological responses were assessed. The ventilatory equivalents were used to determine the ventilatory threshold (VT), and the WV at VT (WV-VT) was recorded. The difference between WV-HRVT1 and WV-HRVT2 was not statistically significant (median and interquartile range = 4.8; 4.8 to 5.4 vs 4.8; 4.2 to 5.4 km/h); the correlation between WV-HRVT1 and WV-HRVT2 was significant (r = 0.84); the intraclass correlation coefficient was high (0.92; 0.82 to 0.96), and the agreement was acceptable (-0.08 km/h; -0.92 to 0.87). The difference between WV-VT and WV-HRVT2 was not statistically significant (4.8; 4.8 to 5.4 vs 4.8; 4.2 to 5.4 km/h) and the agreement was acceptable (0.04 km/h; -1.28 to 1.36). HRVT assessment during walking is a reliable measure and permits the estimation of VT in adults. We suggest the use of the ISWT for the assessment of exercise capacity in middle-aged and older adults.
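Agreement figures of the form "bias; lower to upper limit" are typically Bland-Altman statistics. Assuming that interpretation, the sketch below computes the bias and 95% limits of agreement for two repeated measurements; the velocities are invented, not the study data:

```python
import numpy as np

def bland_altman(measure_1, measure_2):
    """Bias and 95% limits of agreement between two repeated measurements."""
    a, b = np.asarray(measure_1, float), np.asarray(measure_2, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement
    return bias, bias - half_width, bias + half_width

# Illustrative walking velocities at HRVT (km/h) from two ISWTs
wv_test1 = [4.8, 5.4, 4.2, 4.8, 5.4, 4.8]
wv_test2 = [4.8, 4.8, 4.2, 5.4, 5.4, 4.2]
bias, lower, upper = bland_altman(wv_test1, wv_test2)
print(f"bias = {bias:+.2f} km/h, 95% LoA = [{lower:.2f}, {upper:.2f}] km/h")
```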
Abstract:
Our objective was to evaluate the accuracy of three algorithms in differentiating the origins of outflow tract ventricular arrhythmias (OTVAs). This study involved 110 consecutive patients with OTVAs whose standard 12-lead surface electrocardiogram (ECG) showed typical left bundle branch block morphology with an inferior axis. All ECG tracings were retrospectively analyzed using three recently published ECG algorithms: 1) the transitional zone (TZ) index, 2) the V2 transition ratio, and 3) the V2 R-wave duration and R/S wave amplitude indices. Considering all patients, the V2 transition ratio had the highest sensitivity (92.3%), while the R-wave duration and R/S wave amplitude indices in V2 had the highest specificity (93.9%); the latter also had the largest area under the ROC curve (0.925). In patients with left ventricular (LV) rotation, the V2 transition ratio had the highest sensitivity (94.1%), while the R-wave duration and R/S wave amplitude indices in V2 had the highest specificity (87.5%); here the former had the largest area under the ROC curve (0.892). All three published ECG algorithms are effective in differentiating the origin of OTVAs, with the V2 transition ratio being the most sensitive and the V2 R-wave duration and R/S wave amplitude indices the most specific. Among all patients, the V2 R-wave duration and R/S wave amplitude algorithm had the largest area under the ROC curve, but in patients with LV rotation the V2 transition ratio algorithm had the largest area under the ROC curve.
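To illustrate one of these criteria: the V2 transition ratio is commonly defined as the R-wave proportion, R/(R+S), in lead V2 during the arrhythmia divided by the same proportion during sinus rhythm, with values at or above roughly 0.6 usually read as favouring a left-sided (LVOT) origin. The sketch below applies that published definition to made-up amplitudes; the cutoff and numbers are assumptions drawn from the general literature, not data from this study:

```python
def v2_transition_ratio(r_vt, s_vt, r_sinus, s_sinus):
    """V2 transition ratio: R-wave proportion in V2 during the arrhythmia
    divided by the same proportion during sinus rhythm (amplitudes in mV)."""
    vt_fraction = r_vt / (r_vt + s_vt)
    sinus_fraction = r_sinus / (r_sinus + s_sinus)
    return vt_fraction / sinus_fraction

# Illustrative amplitudes; a ratio >= ~0.6 is usually interpreted as favouring an LVOT origin
ratio = v2_transition_ratio(r_vt=0.45, s_vt=0.9, r_sinus=0.3, s_sinus=1.2)
print(f"V2 transition ratio = {ratio:.2f}")
```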
Electromagnetic and thermal design of a multilevel converter with high power density and reliability
Abstract:
Electric energy demand has been growing constantly as the global population increases. To avoid electric energy shortages, renewable energy sources and energy conservation are emphasized all over the world. The role of power electronics in energy saving and in the development of renewable energy systems is significant. Power electronics is applied in wind, solar, fuel cell, and micro turbine energy systems for energy conversion and control, and its use introduces an energy-saving potential in applications such as motors, lighting, home appliances, and consumer electronics. Despite the advantages of power converters, their penetration into the market requires a set of characteristics such as high reliability and power density, cost effectiveness, and low weight, which are dictated by the emerging applications. With these increasing requirements, the design of the power converter is becoming more complicated, and thus a multidisciplinary approach to modelling the converter is required. In this doctoral dissertation, methods and models are developed for the design of a multilevel power converter and the analysis of the related electromagnetic, thermal, and reliability issues. The focus is on the design of the main circuit. The electromagnetic model of the laminated busbar system and the IGBT modules is established with the aim of minimizing the stray inductance of the commutation loops, which degrades the converter power capability. A circular busbar system is proposed to achieve equal current sharing among parallel-connected devices and is implemented in the non-destructive test set-up. In addition to the electromagnetic model, a thermal model of the laminated busbar system is developed based on a lumped-parameter thermal model. The temperature and the temperature-dependent power losses of the busbars are estimated by the proposed algorithm. The Joule losses produced by non-sinusoidal currents flowing through the busbars in the converter are estimated taking into account the skin and proximity effects, which have a strong influence on the AC resistance of the busbars. A lifetime estimation algorithm was implemented to investigate the influence of the cooling solution on the reliability of the IGBT modules. As efficient cooling solutions have a low thermal inertia, they cause excessive temperature cycling of the IGBTs; thus, a reliability analysis is required when selecting the cooling solution for a particular application. Control of the cooling solution based on the use of a heat flux sensor is proposed to reduce the amplitude of the temperature cycles. The developed methods and models are verified experimentally on a laboratory prototype.
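To illustrate the kind of lumped-parameter thermal model referred to above, the sketch below integrates a single thermal node (one thermal resistance and one capacitance between busbar and ambient) under a pulsed loss profile. All parameter values are placeholders, not figures from the dissertation:

```python
import numpy as np

def simulate_lumped_thermal(p_loss, r_th, c_th, t_ambient, dt):
    """Single-node lumped-parameter thermal model:
    C_th * dT/dt = P_loss - (T - T_ambient) / R_th, integrated with forward Euler."""
    temps = [t_ambient]
    for p in p_loss:
        t = temps[-1]
        dT = (p - (t - t_ambient) / r_th) / c_th
        temps.append(t + dT * dt)
    return np.array(temps)

# Placeholder values: 1 K/W busbar-to-ambient resistance, 50 J/K heat capacity,
# 20 W Joule losses pulsed on and off every 60 s
losses = np.concatenate([np.full(60, 20.0), np.zeros(60)] * 3)
temperature = simulate_lumped_thermal(losses, r_th=1.0, c_th=50.0, t_ambient=25.0, dt=1.0)
print(f"peak temperature ≈ {temperature.max():.1f} °C")
```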
Abstract:
Many industrial applications need object recognition and tracking capabilities. The algorithms developed for these purposes are computationally expensive, yet real-time performance, high accuracy and low power consumption are essential requirements for such systems. When all these requirements are combined, hardware acceleration of these algorithms becomes a feasible solution. The purpose of this study is to analyze the current state of these hardware acceleration solutions: which algorithms have been implemented in hardware, and what modifications have been made to adapt these algorithms to hardware.
Abstract:
Human beings have always strived to preserve their memories and spread their ideas. In the beginning this was done through human interpretation, such as telling stories and creating sculptures. Later, technological progress made it possible to create a recording of a phenomenon: first as an analogue recording onto a physical object, and later digitally, as a sequence of bits to be interpreted by a computer. By the end of the 20th century, technological advances had made it feasible to distribute media content over a computer network instead of on physical objects, thus enabling the concept of digital media distribution. Many digital media distribution systems already exist, and their continued, and in many cases increasing, usage indicates high interest in their future enhancement and enrichment. By looking at these digital media distribution systems, we have identified three main areas of possible improvement: network structure and coordination, transport of content over the network, and the encoding used for the content. In this thesis, our aim is to show that improvements in performance, efficiency and availability can be made in conjunction with improvements in software quality and reliability through the use of formal methods: mathematical approaches to reasoning about software that allow us to prove its correctness together with other desirable properties. We envision a complete media distribution system based on a distributed architecture, such as peer-to-peer networking, in which the different parts of the system have been formally modelled and verified. Starting with the network itself, we show how it can be formally constructed and modularised in the Event-B formalism, such that the modelling of one node is separated from the modelling of the network itself. We also show how the piece selection algorithm in the BitTorrent peer-to-peer transfer protocol can be adapted for on-demand media streaming, and how this can be modelled in Event-B. Furthermore, we show how modelling one peer in Event-B can give results similar to simulating an entire network of peers. Going further, we introduce a formal specification language for content transfer algorithms and show that such a language can make these algorithms easier to understand. We also show how generating Event-B code from this language results in less complexity than creating the models from written specifications. Finally, we consider the decoding part of a media distribution system by showing how video decoding can be done in parallel, based on formally defined dependencies between frames and blocks in a video sequence; this step, too, can be performed in a way that is mathematically proven correct. Our modelling and proving in this thesis is mostly tool-based, which demonstrates the advance of formal methods as well as their increased reliability, and thus advocates their more widespread use in the future.
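The BitTorrent adaptation mentioned above replaces pure rarest-first piece selection with a policy that respects playback order. As a rough illustration only, and not the algorithm modelled in the thesis, the sketch below restricts rarest-first selection to a sliding playback window:

```python
import random

def select_piece(have, availability, playback_pos, window=16):
    """Window-constrained rarest-first piece selection for on-demand streaming:
    only pieces inside the playback window are considered, and among those the
    rarest one in the swarm is requested (ties broken randomly)."""
    candidates = [i for i in range(playback_pos, min(playback_pos + window, len(availability)))
                  if i not in have]
    if not candidates:
        return None                      # the whole window is already downloaded
    rarest = min(availability[i] for i in candidates)
    return random.choice([i for i in candidates if availability[i] == rarest])

# Illustrative swarm state: availability[i] = number of peers holding piece i
availability = [5, 4, 4, 1, 3, 2, 6, 1, 2, 5, 3, 1]
have = {0, 1, 2}
print(select_piece(have, availability, playback_pos=2))   # -> one of the rarest in-window pieces (3, 7 or 11)
```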
Abstract:
Currently, power generation is one of the most significant aspects of life for all of mankind. One can barely imagine our life without electricity and thermal energy, so different technologies for producing these types of energy need to be used. Each of these technologies will always have its own advantages and disadvantages. Nevertheless, every technology must satisfy requirements such as efficiency, ecological safety and reliability. For power generation based on nuclear energy, these requirements must be maintained especially strictly, since accidents at nuclear power plants may cause very long-term, deadly consequences. In order to prevent possible disasters related to accidents at nuclear power plants, powerful algorithms have been developed in recent decades. Such algorithms are able to handle calculations of the different physical processes and phenomena of real facilities. However, the results obtained by computation must be verified against experimental data.
Abstract:
Knowledge seems to need the admixture of de facto reliability and epistemic responsibility, but philosophers have had a hard time combining the two into a satisfactory account of knowledge. In this paper I attempt to find a solution by capitalizing on the real and ubiquitous human phenomenon that is the social dispersal of epistemic labour through time. More precisely, the central objective of the paper is to deliver a novel and plausible social account of knowledge-relevant responsibility and to consider the merits of the proposed combination of reliability and responsibility with respect to certain cases of unreflective epistemic subjects.
Abstract:
The increasing performance of computers has made it possible to solve algorithmically problems for which manual and possibly inaccurate methods were previously used. Nevertheless, one must still pay attention to the performance of an algorithm if huge datasets are used or if the problem is computationally difficult. Two geographic problems are studied in the articles included in this thesis. In the first problem the goal is to determine distances from points, called study points, to shorelines in predefined directions. Together with other information, mainly related to wind, these distances can be used to estimate wave exposure at different areas. In the second problem the input consists of a set of sites where water quality observations have been made and of the results of the measurements at the different sites. The goal is to select a subset of the observational sites in such a manner that water quality is still measured with sufficient accuracy when monitoring at the other sites is stopped to reduce economic cost. Most of the thesis concentrates on the first problem, known as the fetch length problem. The main challenge is that the two-dimensional map is represented as a set of polygons with millions of vertices in total, and the distances may need to be computed for millions of study points in several directions. Efficient algorithms are developed for the problem, one of them approximate and the others exact except for rounding errors. The solutions also differ in that three of them are targeted at serial operation or a small number of CPU cores, whereas one, together with its further developments, is also suitable for parallel machines such as GPUs.
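At its core, the fetch length computation asks how far a ray cast from a study point in a fixed direction travels before hitting a shoreline edge. A naive per-segment sketch of that geometric primitive is shown below (the efficient algorithms in the thesis avoid testing every segment); the coordinates are illustrative:

```python
import math

def fetch_length(point, direction_deg, shoreline_segments):
    """Distance from a study point to the nearest shoreline along one direction.
    The shoreline is a list of ((x1, y1), (x2, y2)) segments; returns None if the
    ray never hits a segment (open water in that direction)."""
    px, py = point
    theta = math.radians(direction_deg)
    dx, dy = math.cos(theta), math.sin(theta)
    best = None
    for (x1, y1), (x2, y2) in shoreline_segments:
        ex, ey = x2 - x1, y2 - y1
        denom = dx * ey - dy * ex
        if abs(denom) < 1e-12:                          # ray parallel to the segment
            continue
        t = ((x1 - px) * ey - (y1 - py) * ex) / denom   # distance along the ray
        u = ((x1 - px) * dy - (y1 - py) * dx) / denom   # position along the segment
        if t > 0 and 0.0 <= u <= 1.0:
            best = t if best is None else min(best, t)
    return best

segments = [((100.0, -50.0), (100.0, 50.0)), ((0.0, 80.0), (200.0, 80.0))]
print(fetch_length((0.0, 0.0), 0.0, segments))          # -> 100.0 (hits the first segment)
```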
Abstract:
Laser scribing is currently a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Because of the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects as they occur. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be transferred to fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system experimentally. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with 20 W maximum average power, and the scan head optics is a Scanlab Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner through a camera adapter so that it follows the laser process. A powerful, fully programmable industrial computer was chosen to execute the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using the LabVIEW system design software. The performance of the algorithms was assessed by analyzing a non-moving image of the scribing line with a resolution of 960x20 pixels; the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path containing a variable number of defects at 2000 mm/s with the laser turned off, at an image analysis speed of 430 frames per second. The experiment was successful: the algorithms detected all defects on the scribing path. A final monitoring experiment was performed during an actual laser process; however, it was challenging to make the active laser illumination work with the laser scanner because of the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
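The defect analysis described above is based on particle analysis of camera frames. As a rough Python analogue of that idea (the thesis implementation is in LabVIEW; the threshold, minimum area and synthetic frame below are placeholders), one can threshold a frame, label connected components and flag sufficiently large particles:

```python
import numpy as np
from scipy import ndimage

def detect_defects(frame, intensity_threshold=120, min_area=5):
    """Particle-analysis style defect detection on one grayscale frame:
    threshold the image, label connected components, and report components
    whose pixel area exceeds a minimum size (all parameter values are placeholders)."""
    binary = frame > intensity_threshold
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=range(1, n + 1)) if n else []
    return [i + 1 for i, a in enumerate(areas) if a >= min_area]

# Synthetic 20 x 960 "scribing line" frame with one bright defect blob
frame = np.zeros((20, 960), dtype=np.uint8)
frame[8:12, 300:310] = 200
print(detect_defects(frame))     # -> [1]: one defect-sized particle found
```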