993 results for automated testing
Abstract:
OBJECTIVE: To evaluate the influence of circadian variation on tilt-table testing (TTT) results by comparing the positivity rate of the test performed in the morning with that of the test performed in the afternoon, and to evaluate the reproducibility of the results in different periods of the day. METHODS: One hundred twenty-three patients with recurrent unexplained syncope or near-syncope referred for TTT were randomized into 2 groups. In group I (68 patients), TTT was performed first in the afternoon and then in the morning. In group II (55 patients), the test was performed first in the morning and then in the afternoon. RESULTS: The TTT protocol was the prolonged passive test, without drug sensitization. Twenty-nine (23.5%) patients had a positive result in at least one of the periods. The positivity rate for each period was similar: 20 (16.2%) patients in the afternoon and 19 (15.4%) in the morning (p=1.000). Total reproducibility (positive/positive and negative/negative) was observed in 49 (89%) patients in group I and in 55 (81%) in group II. Reproducibility of the results was obtained in 94 (90.4%) patients with first negative tests but in only 10 (34%) patients with first positive tests. CONCLUSION: TTT can be performed during any period of the day, or even in both periods to enhance positivity. Given the low reproducibility of positive tests, serial TTT to evaluate therapeutic efficacy should be performed during the same period of the day.
Abstract:
The job of health professionals, including nurses, is considered inherently stressful (Lee & Wang, 2002; Rutledge et al., 2009), and thus it is important to improve and develop specific measures that are sensitive to the demands that health professionals face. This study analysed the psychometric properties of three instruments that focus on the professional experiences of nurses in aspects related to occupational stress, cognitive appraisal, and mental health issues. The evaluation protocol included the Stress Questionnaire for Health Professionals (SQHP; Gomes, 2014), the Cognitive Appraisal Scale (CAS; Gomes, Faria, & Gonçalves, 2013), and the General Health Questionnaire-12 (GHQ-12; Goldberg, 1972). Validity and reliability issues were considered with statistical analysis (i.e. confirmatory factor analysis, convergent validity, and composite reliability) that revealed adequate values for all of the instruments, namely, a six-factor structure for the SQHP, a five-factor structure for the CAS, and a two-factor structure for the GHQ-12. In conclusion, this study proposes three consistent instruments that may be useful for analysing nurses’ adaptation to work contexts.
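As a worked illustration of the composite-reliability criterion mentioned in this abstract, a minimal Python sketch using the standard formula CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) for standardized loadings; the loadings below are hypothetical, not values from the study:

    # Composite reliability from standardized factor loadings; the error
    # variance of each item is taken as 1 - loading**2.
    def composite_reliability(loadings):
        lam_sum = sum(loadings)
        error_var = sum(1 - l ** 2 for l in loadings)
        return lam_sum ** 2 / (lam_sum ** 2 + error_var)

    # Hypothetical loadings for one factor of a questionnaire
    print(round(composite_reliability([0.72, 0.68, 0.80, 0.75]), 3))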
Abstract:
OBJECTIVE: To assess the Dixtal DX2710 automated oscillometric device for blood pressure measurement according to the protocols of the BHS and the AAMI. METHODS: Three blood pressure measurements were taken in 94 patients (53 females; ages 15 to 80 years). The measurements were taken in random order by 2 observers trained to measure blood pressure, using a mercury column device connected to the automated device. The device was classified according to the protocols of the BHS and the AAMI. RESULTS: The mean blood pressure obtained by the observers was 148±38/93±25 mmHg, and that obtained with the device was 148±37/89±26 mmHg. Considering the differences between the measurements obtained by the observers and those obtained with the automated device according to the BHS criteria, the device was graded "A" for systolic pressure (69% of the differences < 5; 90% < 10; and 97% < 15 mmHg) and "B" for diastolic pressure (63% of the differences < 5; 83% < 10; and 93% < 15 mmHg). The mean and standard deviation of the differences were 0±6.27 mmHg for systolic pressure and 3.82±6.21 mmHg for diastolic pressure. CONCLUSION: The Dixtal DX2710 device was approved according to the international recommendations.
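The BHS grading used above is based on the cumulative percentages of absolute device-observer differences within 5, 10, and 15 mmHg. A minimal Python sketch of that classification follows; the threshold table is the one commonly cited for the BHS protocol and is an assumption here, not taken from the abstract:

    # BHS grade: the percentages of absolute differences within 5, 10 and
    # 15 mmHg must all meet a grade's thresholds (assumed values below).
    BHS_THRESHOLDS = {  # grade: (% <= 5, % <= 10, % <= 15 mmHg)
        "A": (60, 85, 95),
        "B": (50, 75, 90),
        "C": (40, 65, 85),
    }

    def bhs_grade(observer, device):
        diffs = [abs(o - d) for o, d in zip(observer, device)]
        pcts = [100 * sum(x <= t for x in diffs) / len(diffs) for t in (5, 10, 15)]
        for grade, limits in BHS_THRESHOLDS.items():  # checks A first
            if all(p >= lim for p, lim in zip(pcts, limits)):
                return grade
        return "D"

With the systolic figures reported above (69%, 90%, 97%) this yields "A", and with the diastolic figures (63%, 83%, 93%) it yields "B", matching the abstract.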
Abstract:
OBJECTIVE: To compare the blood pressure response to dynamic exercise in hypertensive patients taking trandolapril or captopril. METHODS: We carried out a prospective, randomized, blinded study of 40 patients with primary hypertension and no other associated disease. The patients were divided into 2 groups (n=20), paired by age, sex, race, and body mass index, and underwent 2 symptom-limited exercise tests on a treadmill before and after 30 days of treatment with captopril (75 to 150 mg/day) or trandolapril (2 to 4 mg/day). RESULTS: The groups were similar prior to treatment (p>0.05), and both drugs reduced blood pressure at rest (p<0.001). During treatment, trandolapril caused a greater increase in functional capacity (+31%) than captopril did (+17%; p=0.01) and provided better blood pressure control during exercise, observed as a reduction in the variation of systolic blood pressure/MET (trandolapril: 10.7±1.9 mmHg/U vs 7.4±1.2 mmHg/U, p=0.02; captopril: 9.1±1.4 mmHg/U vs 11.4±2.5 mmHg/U, p=0.35), a reduction in peak diastolic blood pressure (trandolapril: 116.8±3.1 mmHg vs 108.1±2.5 mmHg, p=0.003; captopril: 118.2±3.1 mmHg vs 115.8±3.3 mmHg, p=0.35), and a reduction in the number of tests interrupted because of excessive elevation in blood pressure (trandolapril: 50% vs 15%, p=0.009; captopril: 50% vs 45%, p=0.32). CONCLUSION: Monotherapy with trandolapril is more effective than monotherapy with captopril in controlling blood pressure during exercise in hypertensive patients.
Abstract:
OBJECTIVE: To assess the safety, feasibility, and results of early exercise testing in patients with chest pain admitted to the emergency room of the chest pain unit, in whom acute myocardial infarction and high-risk unstable angina had been ruled out. METHODS: A study of 1060 consecutive patients with chest pain admitted to the emergency room of the chest pain unit was carried out. Of these, 677 (64%) patients were eligible for exercise testing, but only 268 (40%) underwent the test. RESULTS: The mean age of the patients studied was 51.7±12.1 years, and 188 (70%) were males. Twenty-eight (10%) patients had a previous history of coronary artery disease, 244 (91%) had a normal or nonspecific electrocardiogram, and 150 (56%) underwent exercise testing within a 12-hour interval. The results of the exercise tests were as follows: 34 (13%) were positive, 191 (71%) were negative, and 43 (16%) were inconclusive. In the group of patients with a positive exercise test, 21 (62%) underwent coronary angiography, 11 underwent angioplasty, and 2 underwent myocardial revascularization. In a univariate analysis, type A/B chest pain (definitely/probably anginal) (p<0.0001), previous coronary artery disease (p<0.0001), and route 2 (patients at higher risk) correlated with a positive or inconclusive test (p<0.0001). CONCLUSION: In patients with chest pain in whom acute myocardial infarction and high-risk unstable angina had been ruled out, the exercise test proved to be feasible, safe, and well tolerated.
Abstract:
Identification and characterization of the problem. One of the most important problems associated with building software is its correctness. In the search for guarantees that software works correctly, a variety of development techniques with solid mathematical and logical foundations, known as formal methods, have emerged. Because of their nature, applying formal methods requires considerable experience and knowledge, above all in mathematics and logic, so their application is costly in practice. As a consequence, their main application has been limited to critical systems, that is, systems whose malfunction can cause damage of great magnitude, even though the benefits these techniques provide are relevant to every kind of software. Carrying the benefits of formal methods over to software development contexts broader than critical systems would have a high impact on productivity in those contexts. Hypothesis. Having automated analysis tools is an element of great importance. Examples of this are several powerful analysis tools based on formal methods whose application targets source code directly. In the vast majority of these tools, the gap between the notions developers are accustomed to and those needed to apply these formal analysis tools remains too wide. Many tools use assertion languages that fall outside the usual knowledge and habits of developers. Moreover, in many cases the output produced by the analysis tool requires some command of the underlying formal method. This problem can be alleviated by producing suitable tools. Another problem intrinsic to automated analysis techniques is how they behave as the size and complexity of the artifacts under analysis grow (scalability). This limitation is widely known and is considered critical to the applicability of formal analysis methods in practice. One way to attack this problem is to exploit information and characteristics of specific application domains. Objectives. This project aims at building formal analysis tools that contribute to quality, in terms of functional correctness, of specifications, models, or code in the context of software development. More precisely, it seeks to identify specific settings in which certain automated analysis techniques, such as analysis based on SMT or SAT solving, or model checking, can be taken to levels of scalability beyond those known for these techniques in general settings. We will attempt to implement the adaptations of the chosen techniques in tools that developers familiar with the application context, but not necessarily knowledgeable about the underlying methods or techniques, can use. Materials and methods. The materials to be employed will be literature relevant to the area and computing equipment. The methods will be those of discrete mathematics, logic, and software engineering. Expected results. One expected result of the project is the identification of specific domains for the application of formal analysis methods. The project is also expected to produce analysis tools whose usability makes them suitable for developers without specific training in the formal methods employed. Importance of the project. The main impact of this project will be its contribution to the practical application of formal analysis techniques at different stages of software development, with the aim of increasing software quality and reliability.
A crucial factor for software quality is correctness. Traditionally, formal approaches to software development concentrate on functional correctness and tackle this problem by relying on well-defined notations founded on solid mathematical grounds. This makes formal methods better suited for analysis, due to their precise semantics, but they are usually more complex and require familiarity and experience with the manipulation of mathematical definitions. Consequently, their acceptance by software engineers is rather restricted, and applications of formal methods have been confined to critical systems, even though the advantages that formal methods provide apply to any kind of software system. It is accepted that appropriate software tool support for formal analysis is essential if one seeks to support software development based on formal methods. Indeed, some of the relatively recent successes of formal methods are accompanied by good-quality tools that automate powerful analysis mechanisms and are even integrated into widely used development environments. Still, most of these tools concentrate on code analysis, and in many cases they are still far from being simple enough to be employed by software engineers without experience in formal methods. Another important problem for the adoption of tool support for formal methods is scalability. Automated software analysis is intrinsically complex, and thus techniques do not scale well in the general case. In this project, we will attempt to identify particular modelling, design, specification, or coding activities in software development processes in which to apply automated formal analysis techniques. By focusing on very specific application domains, we expect to find characteristics that can be exploited to increase the scalability of the corresponding analyses, compared with the general case.
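To make the SMT-solving-based analysis mentioned in this project concrete, here is a minimal sketch using the Z3 solver's Python API (an illustrative verification condition, not one of the project's tools): a claim is proved valid by showing its negation is unsatisfiable.

    # Prove a small assertion with Z3: a claim is valid iff its
    # negation is unsatisfiable.
    from z3 import Ints, And, Implies, Not, Solver, unsat

    x, y = Ints("x y")
    claim = Implies(And(x > 0, y > 0), x + y > 1)  # over the integers

    solver = Solver()
    solver.add(Not(claim))          # look for a counterexample
    if solver.check() == unsat:
        print("claim is valid")
    else:
        print("counterexample:", solver.model())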
Abstract:
As digital image processing techniques become increasingly used in a broad range of consumer applications, the critical need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is of crucial importance that we are able to empirically study their performance. Apart from the field of biometrics, little emphasis has been placed on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems quickly and more efficiently.
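The framework itself is not detailed in the abstract; as an illustration of the automated-evaluation idea, a minimal Python sketch that runs each algorithm over a fixed test set and aggregates a score against ground truth (all names are hypothetical):

    # Skeleton of an automated evaluation loop: `algorithms` maps a name
    # to a callable image -> result, `test_cases` is a list of
    # (image, ground_truth) pairs, and `score` compares a result with
    # its ground truth (higher meaning better here).
    def evaluate(algorithms, test_cases, score):
        results = {}
        for name, algorithm in algorithms.items():
            per_image = [score(algorithm(img), truth) for img, truth in test_cases]
            results[name] = sum(per_image) / len(per_image)
        return results  # mean score per algorithm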
Abstract:
Univariate statistical control charts, such as the Shewhart chart, do not satisfy the requirements for process monitoring on a high-volume automated fuel cell manufacturing line because of the number of variables that require monitoring. In a high-volume process, the risk of elevated false alarm rates can present problems if univariate methods are used. Multivariate statistical methods are discussed as an alternative for process monitoring and control. The research presented was conducted on a manufacturing line that evaluates the performance of a fuel cell. It has three stages of production assembly that contribute to the final end-product performance. The product performance is assessed by power and energy measurements taken at various time points throughout the discharge testing of the fuel cell. The multivariate techniques identified in the literature review are evaluated using individual and batch observations. Multivariate control charts based on Hotelling's T2 are compared with other multivariate methods, such as Principal Component Analysis (PCA); PCA was identified as the most suitable method. Control charts, such as scores, T2, and DModX charts, are constructed from the PCA model. Diagnostic procedures using contribution plots for out-of-control points detected by these control charts are also discussed; these plots enable the investigator to perform root cause analysis. Multivariate batch techniques are compared with the individual observations typically seen in continuous processes. Recommendations for the introduction of multivariate techniques appropriate for most high-volume processes are also covered.
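As a sketch of the PCA-based monitoring described in this abstract (not the thesis's own code), Hotelling's T2 for individual observations can be computed from the PCA scores and eigenvalues, with an F-distribution control limit; the two retained components and the 1% false alarm rate below are illustrative assumptions:

    # Hotelling's T2 chart from a PCA model (individual observations).
    import numpy as np
    from scipy.stats import f
    from sklearn.decomposition import PCA

    def t2_chart(X, n_components=2, alpha=0.01):
        n = X.shape[0]
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(X)          # score vectors t_i
        eigenvalues = pca.explained_variance_  # variance of each score
        t2 = np.sum(scores ** 2 / eigenvalues, axis=1)  # sum_k t_ik^2 / lambda_k
        k = n_components
        # Approximate upper control limit based on the F distribution
        ucl = k * (n - 1) * (n + 1) / (n * (n - k)) * f.ppf(1 - alpha, k, n - k)
        return t2, ucl  # points with t2 > ucl are flagged out of control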
Abstract:
The research described in this thesis was developed as part of the Information Management for Green Design (IMAGREE) Project. The IMAGREE Project was funded by Enterprise Ireland under a Strategic Research Grant Scheme as a partnership project between Galway Mayo Institute of Technology and CIMRU, University College Galway. The project aimed to develop a CAD-integrated software tool to support environmental information management for design, particularly for the electronics-manufacturing sector in Ireland.
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use state-of-the-art hardware, software, and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation, and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on issues such as efficiency, flexibility, data accuracy, test quality, and data integrity, among others. The findings reaffirm how computer integration continues to benefit any organisation; with more recent advances in computer technology, communication methods, and software capabilities, less expensive and more sophisticated test solutions are now possible, allowing more organisations to benefit from the many advantages associated with CIT. Examples of computer-integrated test set-ups and the benefits associated with computer integration are discussed.
Abstract:
Seismic analysis, horizon matching, fault tracking, marked point process, stochastic annealing
Abstract:
ST2, a member of the interleukin-1 receptor family, is a biomarker whose circulating soluble concentrations are believed to reflect cardiovascular stress and fibrosis. Recent studies have demonstrated soluble ST2 to be a strong predictor of cardiovascular outcomes in both chronic and acute heart failure. It is a new biomarker that meets all the criteria required of a useful biomarker. Of note, it adds information to the natriuretic peptides (NPs), and some studies have shown it is even superior in terms of risk stratification. It is the most promising biomarker in the field of heart failure since the introduction of NPs and might be particularly useful as a therapy guide.
Abstract:
This work describes a test tool for evaluating the performance of different end-to-end available bandwidth estimation algorithms and of their different implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options for providing an available bandwidth estimation mechanism for high-speed data transport protocols and to develop the basic functionality of a test tool that can manage the entities of the test application on all involved testing hosts, aided by some middleware.
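The paper's middleware is not described here in detail; the following is a minimal Python sketch of the orchestration idea, starting an estimator's receiver on one test host and its sender on another over ssh and collecting the raw output (host names and estimator commands are hypothetical placeholders):

    # Run one sender/receiver estimator pair across two test hosts.
    import subprocess

    def run_estimator(sender_host, receiver_host, recv_cmd, send_cmd, timeout=60):
        # Start the receiver side first so it is listening for the sender.
        receiver = subprocess.Popen(["ssh", receiver_host, recv_cmd],
                                    stdout=subprocess.PIPE, text=True)
        sender = subprocess.run(["ssh", sender_host, send_cmd],
                                capture_output=True, text=True, timeout=timeout)
        recv_out, _ = receiver.communicate(timeout=timeout)
        # Raw tool output; a per-tool adapter would parse the estimate.
        return sender.stdout, recv_out

    # e.g. run_estimator("hostA", "hostB", "estimator_rcv", "estimator_snd")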
Abstract:
Today, usability testing is essential in the development of software and systems. A stationary usability lab offers many possibilities for evaluating usability, but it reaches its limits in terms of flexibility and experimental conditions. Mobile usability studies deliberately take outside influences into account, and they require a specially adapted approach to preparation, implementation, and evaluation. Using the example of a mobile eye tracking study, the difficulties and opportunities of mobile testing are considered.