978 results for Field Testing
Evaluation of treadmill stress testing for risk stratification after acute myocardial infarction
Abstract:
Master's dissertation in Biophysics and Bionanosystems
Abstract:
OBJECTIVE: To evaluate, by exercise stress testing, the cardiorespiratory effects of pyridostigmine (PYR), a reversible acetylcholinesterase inhibitor. METHODS: A double-blind, randomized, cross-over, placebo-controlled comparison of the hemodynamic and ventilation variables of 10 healthy subjects who underwent three exercise stress tests (the first for adaptation and determination of exercise tolerance, the other two after administration of placebo or 45 mg of PYR). RESULTS: Heart rate at rest was 68±3 vs 68±3 bpm before and after placebo, respectively (P=0.38), and 70±2 vs 59±2 bpm before and after pyridostigmine, respectively (P<0.01). During exercise, heart rate was significantly lower after PYR than after placebo at 20% (P=0.02), 40% (P=0.03), 80% (P=0.05), and 100% (P=0.02) of peak effort. No significant differences were observed in arterial blood pressure, oxygen consumption at submaximal and maximal effort, exercise duration, respiratory ratio, CO2 production, ventilation threshold, minute ventilation, or oxygen pulse. CONCLUSION: Pyridostigmine, at a dose of 45 mg, decreases heart rate at rest and during exercise, with minimal side effects and without interfering with exercise tolerance or ventilation variables.
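As a rough sketch of the paired, cross-over comparison described above, the following Python snippet runs a paired t-test on before/after resting heart rates; the values and variable names are invented for illustration and are not the study's data.

    # Paired (within-subject) comparison, as in a cross-over design:
    # each subject is measured before and after the same treatment.
    # Heart-rate values below are hypothetical, not the study's data.
    from scipy import stats

    hr_before_pyr = [70, 68, 72, 71, 69, 70, 73, 68, 71, 70]  # rest HR (bpm)
    hr_after_pyr = [59, 57, 61, 60, 58, 59, 62, 57, 60, 59]   # after 45 mg PYR

    t_stat, p_value = stats.ttest_rel(hr_before_pyr, hr_after_pyr)
    print(f"paired t = {t_stat:.2f}, P = {p_value:.4f}")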
Abstract:
OBJECTIVE: To evaluate the influence of circadian variation on tilt-table testing (TTT) results by comparing the positivity rate of tests performed in the morning with that of tests performed in the afternoon, and to evaluate the reproducibility of the results in different periods of the day. METHODS: One hundred twenty-three patients with recurrent unexplained syncope or near-syncope referred for TTT were randomized into 2 groups. In group I (68 patients), TTT was performed first in the afternoon and then in the morning; in group II (55 patients), first in the morning and then in the afternoon. RESULTS: The TTT protocol was the prolonged passive test, without drug sensitization. Twenty-nine (23.5%) patients had a positive result in at least one of the periods. The positivity rate for each period was similar: 20 (16.2%) patients in the afternoon and 19 (15.4%) in the morning (p=1.000). Total reproducibility (positive/positive and negative/negative) was observed in 49 (89%) patients in group I and in 55 (81%) in group II. Reproducibility was obtained in 94 (90.4%) patients whose first test was negative but in only 10 (34%) patients whose first test was positive. CONCLUSION: TTT can be performed during any period of the day, or even in both periods to enhance positivity. Considering the low reproducibility of positive tests, serial TTT to evaluate therapeutic efficacy should be performed during the same period of the day.
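A minimal sketch of the reproducibility measure used above, assuming each patient contributes a (first test, second test) outcome pair; the pairs below are hypothetical, not the study's data.

    # Total reproducibility: share of patients whose two tilt-table tests
    # agree (positive/positive or negative/negative). Hypothetical data.
    pairs = [("neg", "neg"), ("pos", "pos"), ("pos", "neg"), ("neg", "neg")]

    concordant = sum(1 for first, second in pairs if first == second)
    print(f"total reproducibility: {concordant / len(pairs):.0%}")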
Abstract:
The work of health professionals, including nurses, is considered inherently stressful (Lee & Wang, 2002; Rutledge et al., 2009), so it is important to develop and refine measures that are sensitive to the demands health professionals face. This study analysed the psychometric properties of three instruments that focus on the professional experiences of nurses in aspects related to occupational stress, cognitive appraisal, and mental health. The evaluation protocol included the Stress Questionnaire for Health Professionals (SQHP; Gomes, 2014), the Cognitive Appraisal Scale (CAS; Gomes, Faria, & Gonçalves, 2013), and the General Health Questionnaire-12 (GHQ-12; Goldberg, 1972). Validity and reliability were examined with statistical analyses (i.e., confirmatory factor analysis, convergent validity, and composite reliability) that revealed adequate values for all of the instruments, namely a six-factor structure for the SQHP, a five-factor structure for the CAS, and a two-factor structure for the GHQ-12. In conclusion, this study proposes three consistent instruments that may be useful for analysing nurses' adaptation to work contexts.
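For readers unfamiliar with the composite reliability statistic mentioned above, the sketch below computes the standard formula from standardized factor loadings; the loadings are hypothetical, not values reported in the study.

    # Composite reliability from standardized loadings:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # with error variance 1 - loading^2 for each indicator. Hypothetical data.
    def composite_reliability(loadings):
        s = sum(loadings)
        error = sum(1 - l ** 2 for l in loadings)
        return s ** 2 / (s ** 2 + error)

    print(round(composite_reliability([0.72, 0.68, 0.75, 0.70]), 2))  # 0.81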
Abstract:
OBJECTIVE: To compare the blood pressure response to dynamic exercise in hypertensive patients taking trandolapril or captopril. METHODS: We carried out a prospective, randomized, blinded study of 40 patients with primary hypertension and no other associated disease. The patients were divided into 2 groups (n=20), paired by age, sex, race, and body mass index, and underwent 2 symptom-limited treadmill exercise tests before and after 30 days of treatment with captopril (75 to 150 mg/day) or trandolapril (2 to 4 mg/day). RESULTS: The groups were similar prior to treatment (p>0.05), and both drugs reduced blood pressure at rest (p<0.001). During treatment, trandolapril produced a greater increase in functional capacity (+31%) than captopril did (+17%; p=0.01) and provided better blood pressure control during exercise, observed as a reduction in the variation of systolic blood pressure per MET (trandolapril: 10.7±1.9 mmHg/MET vs 7.4±1.2 mmHg/MET, p=0.02; captopril: 9.1±1.4 mmHg/MET vs 11.4±2.5 mmHg/MET, p=0.35), a reduction in peak diastolic blood pressure (trandolapril: 116.8±3.1 mmHg vs 108.1±2.5 mmHg, p=0.003; captopril: 118.2±3.1 mmHg vs 115.8±3.3 mmHg, p=0.35), and a reduction in tests interrupted because of excessive elevation in blood pressure (trandolapril: 50% vs 15%, p=0.009; captopril: 50% vs 45%, p=0.32). CONCLUSION: Monotherapy with trandolapril is more effective than monotherapy with captopril at controlling blood pressure during exercise in hypertensive patients.
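A minimal worked example of the exercise index reported above (the rise in systolic blood pressure divided by the metabolic workload reached, in METs); the input values are hypothetical, not the study's measurements.

    # Systolic blood pressure variation per MET during an exercise test.
    # Input values are hypothetical, not the study's measurements.
    def sbp_per_met(sbp_rest_mmhg, sbp_peak_mmhg, peak_mets):
        return (sbp_peak_mmhg - sbp_rest_mmhg) / peak_mets

    print(f"{sbp_per_met(130, 204, 10):.1f} mmHg/MET")  # -> 7.4 mmHg/MET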
Abstract:
OBJECTIVE: To assess the safety, feasibility, and results of early exercise testing in patients with chest pain admitted to the emergency room of a chest pain unit in whom acute myocardial infarction and high-risk unstable angina had been ruled out. METHODS: A study of 1060 consecutive patients with chest pain admitted to the emergency room of the chest pain unit was carried out. Of these, 677 (64%) patients were eligible for exercise testing, but only 268 (40%) underwent the test. RESULTS: The mean age of the patients studied was 51.7±12.1 years, and 188 (70%) were males. Twenty-eight (10%) patients had a previous history of coronary artery disease, 244 (91%) had a normal or nonspecific electrocardiogram, and 150 (56%) underwent exercise testing within a 12-hour interval. The results of the exercise tests were as follows: 34 (13%) were positive, 191 (71%) were negative, and 43 (16%) were inconclusive. In the group of patients with a positive exercise test, 21 (62%) underwent coronary angiography, 11 underwent angioplasty, and 2 underwent myocardial revascularization. In a univariate analysis, type A/B chest pain (definitely/probably anginal) (p<0.0001), previous coronary artery disease (p<0.0001), and route 2 (patients at higher risk) correlated with a positive or inconclusive test (p<0.0001). CONCLUSION: In patients with chest pain in whom acute myocardial infarction and high-risk unstable angina had been ruled out, exercise testing proved feasible, safe, and well tolerated.
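As a hedged sketch of the kind of univariate analysis reported above, the snippet below applies a chi-square test to a 2x2 table of chest-pain type versus exercise-test result; the counts are hypothetical, not the study's data.

    # Chi-square test of association on a 2x2 contingency table.
    # Counts are hypothetical, not taken from the study.
    from scipy.stats import chi2_contingency

    #        positive/inconclusive  negative
    table = [[30, 40],    # type A/B (anginal) chest pain
             [47, 151]]   # other chest pain

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, p = {p:.4f}")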
Abstract:
v.13:pt.2:no.1(1918)
Abstract:
v.13:pt.2:no.2(1919)
Abstract:
Advances in computing power today are driven by the parallel processing capabilities of modern hardware architectures. These architectures can accelerate algorithms, but only when the algorithms are properly parallelized and exploit the specific processing power of the underlying hardware; moreover, the appropriate parallel form of an algorithm is specific to each type of parallel hardware. Most current general-purpose processors are multicore parts that integrate several processor cores on a single chip, forming a Symmetric Multi-Processor (SMP); today it is hard to find a desktop processor without SMP-style parallelism, and the industry trend is toward ever more cores per chip. Graphics Processor Units (GPU), originally designed for video processing, have developed their computing power by integrating many processing units, and current boards can run on the order of 200 to 400 parallel threads. Because this kind of processing has much in common with scientific computing, these devices have been reoriented as General Processing Graphics Processor Units (GPGPU). Unlike the SMPs noted above, however, GPGPUs are not general-purpose: the limited memory available on each board and the style of parallel processing they require mean that algorithm implementations must be designed carefully for their use to be productive. Finally, Field Programmable Gate Arrays (FPGA) are programmable logic devices capable of performing large numbers of operations in parallel, so they can be used to implement specific algorithms that must run at very high speed; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, our work aims to analyse the specific characteristics of each, and their impact on the structure of algorithms, so that the processing performance obtained is commensurate with the resources used and the platforms can be combined to complement one another. Specifically, starting from the hardware characteristics, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of a parallel algorithm in turn determine which of these hardware types is best suited for its instantiation. In particular, we consider the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
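As a small illustration of the SMP case discussed above (not code from the thesis), this Python sketch maps an independent, CPU-bound function over a data set with one worker per core; the absence of data dependences and of synchronization during processing is precisely the property that makes an algorithm easy to accelerate on any of the platforms compared.

    # SMP-style data parallelism on a multicore CPU: a CPU-bound function
    # with no inter-item data dependences, mapped over a data set.
    import multiprocessing as mp

    def work(x):
        # Stand-in CPU-bound kernel; each item is independent of the others.
        return sum(i * i for i in range(x))

    if __name__ == "__main__":
        data = [200_000] * 32
        with mp.Pool(processes=mp.cpu_count()) as pool:
            results = pool.map(work, data)  # items processed in parallel
        print(len(results), "items processed on", mp.cpu_count(), "cores")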
Abstract:
v.4(1935)
Abstract:
v.5(1940)
Abstract:
Stand-alone solar-powered refrigeration and water desalination, two of the most popular and sought-after applications of solar energy systems, were selected as the topic of the research presented in this thesis. A water desalination system based on evaporation and condensation was found to be the most suitable one to be powered by solar energy. It was established that high-output fast-response solar heat collectors, used to achieve high rates of evaporation, and a reliable solar-powered cooling system, for faster rates of condensation, are the most important factors in achieving increased outputs from solar-powered desalination systems. Comprehensive reviews of solar-powered cooling/refrigeration and of water desalination techniques are presented. Because the Institute of Technology, Sligo has a well-established history of research and development in the production of state-of-the-art high-efficiency fast-response evacuated solar heat collectors, it was decided to use this know-how in the work described in this thesis; achieving high rates of evaporation was therefore not a problem. The solar-powered refrigeration envisaged for use in the desalination system, to facilitate rapid condensation of the evaporated water, consequently had to be addressed first. The principles of various solar-powered refrigeration techniques are also reviewed. The first step in the work on solar-powered refrigeration was to successfully modify a conventional refrigerator of the Platen-Munters design to be powered by high-output fast-response evacuated solar heat collectors. In this work, the first ever successful attempt in the field, temperatures as low as -19°C were achieved in the icebox. A new approach to the use of photovoltaic technology to power a conventional domestic refrigerator was also attempted, by modifying a conventional domestic refrigerator to be powered by photovoltaic panels in the most efficient way. In the system developed and successfully tested in this approach, the power demand was reduced dramatically, and 48 hours of cooling can be achieved with exposure to just 7 hours of sunshine. The successful development of the first ever multi-cycle intermittent solar-powered icemaker is without doubt the most exciting breakthrough in the work described in this thesis. An output of 74.3 kg of ice per module with a total exposure area of 2.88 m², or 25.73 kg per m² per day, is a major improvement over the roughly 5-6 kg of ice per m² per day reported for all the single-cycle intermittent systems. This system then became the basis for the development of a new solar-powered refrigeration system with even higher output, named the "composite" system and described in this thesis. Another major breakthrough of this work is the successful development and testing of a high-output water desalination system that uses a combination of the high-output fast-response evacuated solar heat collectors and the multi-cycle icemaker. The system is capable of producing a maximum of 141 litres of distilled water per day per module with an exposure area of 3.24 m², a production rate of 43.5 litres per m² per day. When this result is compared with the reported daily output of 5 litres of desalinated water per m² per day, the significance of this piece of work becomes apparent.
In the presentation of many of the components and systems described in this thesis, CAD parametric solid modelling has been used instead of photographs to illustrate them more clearly. The multi-cycle icemaker and the high-output desalination systems are the subject of two patent applications.
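The per-area yield figures quoted above follow from a simple division of a module's daily output by its solar exposure area, as the sketch below shows for the desalination module.

    # Specific yield: daily module output divided by solar exposure area.
    def specific_yield(daily_output, exposure_area_m2):
        return daily_output / exposure_area_m2

    # High-output desalination module: 141 litres/day over 3.24 m^2
    print(f"{specific_yield(141, 3.24):.1f} litres per m^2 per day")  # -> 43.5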