975 results for SPECIFIC ERGOMETRIC TESTING
Abstract:
Doctoral thesis in Applied Psychology.
Abstract:
OBJECTIVE: To evaluate the influence of circadian variation on tilt-table testing (TTT) results by comparing the positivity rate of tests performed in the morning with that of tests performed in the afternoon, and to evaluate the reproducibility of the results at different periods of the day. METHODS: One hundred twenty-three patients with recurrent unexplained syncope or near-syncope referred for TTT were randomized into 2 groups. In group I (68 patients), TTT was performed first in the afternoon and then in the morning; in group II (55 patients), it was performed first in the morning and then in the afternoon. RESULTS: The TTT protocol was the prolonged passive test, without drug sensitization. Twenty-nine (23.5%) patients had a positive result in at least one of the periods. The positivity rate was similar for each period: 20 (16.2%) patients in the afternoon and 19 (15.4%) in the morning (p=1.000). Total reproducibility (positive/positive and negative/negative) was observed in 49 (89%) patients in group I and in 55 (81%) in group II. Reproducibility was obtained in 94 (90.4%) patients whose first test was negative but in only 10 (34%) patients whose first test was positive. CONCLUSION: TTT can be performed during either period of the day, and even in both periods to enhance positivity. Given the low reproducibility of positive tests, serial TTT to evaluate therapeutic efficacy should be performed during the same period of the day.
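The paired comparison reported above (p=1.000) can be checked from the counts given. The sketch below is a minimal illustration assuming an exact McNemar-type test on the discordant pairs, which the abstract does not state was the method actually used; the paired-table arrangement is inferred from the reported totals.

```python
from math import comb

# Counts reported in the abstract.
pos_afternoon = 20   # positive in the afternoon test
pos_morning = 19     # positive in the morning test
pos_any = 29         # positive in at least one period

# Derived paired-table counts (an inference from the totals above).
both_pos = pos_afternoon + pos_morning - pos_any   # concordant positive pairs -> 10
only_afternoon = pos_afternoon - both_pos          # discordant: afternoon only -> 10
only_morning = pos_morning - both_pos              # discordant: morning only   -> 9

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test: binomial test on the discordant pairs."""
    n = b + c
    k = min(b, c)
    p_tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * p_tail)

print(f"concordant positive: {both_pos}, discordant: {only_afternoon}/{only_morning}")
print(f"exact two-sided p = {mcnemar_exact(only_afternoon, only_morning):.3f}")  # -> 1.000
```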
Abstract:
OBJECTIVE: To compare the blood pressure response to dynamic exercise in hypertensive patients taking trandolapril or captopril. METHODS: We carried out a prospective, randomized, blinded study of 40 patients with primary hypertension and no other associated disease. The patients were divided into 2 groups (n=20), paired by age, sex, race, and body mass index, and underwent 2 symptom-limited treadmill exercise tests before and after 30 days of treatment with captopril (75 to 150 mg/day) or trandolapril (2 to 4 mg/day). RESULTS: The groups were similar prior to treatment (p<0.05), and both drugs reduced blood pressure at rest (p<0.001). During treatment, trandolapril produced a greater increase in functional capacity than captopril (+31% vs +17%; p=0.01) and provided better blood pressure control during exercise, observed as a reduction in the variation of systolic blood pressure per MET (trandolapril: 10.7±1.9 vs 7.4±1.2 mmHg/MET, p=0.02; captopril: 9.1±1.4 vs 11.4±2.5 mmHg/MET, p=0.35), a reduction in peak diastolic blood pressure (trandolapril: 116.8±3.1 vs 108.1±2.5 mmHg, p=0.003; captopril: 118.2±3.1 vs 115.8±3.3 mmHg, p=0.35), and a reduction in test interruption due to excessive elevation in blood pressure (trandolapril: 50% vs 15%, p=0.009; captopril: 50% vs 45%, p=0.32). CONCLUSION: Monotherapy with trandolapril is more effective than monotherapy with captopril at controlling blood pressure during exercise in hypertensive patients.
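The "variation of systolic blood pressure per MET" above is, in essence, the rise in systolic pressure divided by the change in metabolic workload over the test. A minimal sketch of that arithmetic with hypothetical rest/peak values (not data from this study):

```python
def sbp_per_met(sbp_rest_mmHg, sbp_peak_mmHg, mets_rest, mets_peak):
    """Systolic blood pressure rise per metabolic equivalent (mmHg/MET)."""
    return (sbp_peak_mmHg - sbp_rest_mmHg) / (mets_peak - mets_rest)

# Hypothetical example: SBP rises from 140 to 210 mmHg while workload
# increases from 1 MET at rest to 8 METs at peak exercise.
print(f"{sbp_per_met(140, 210, 1, 8):.1f} mmHg/MET")  # -> 10.0
```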
Abstract:
OBJECTIVE: To assess the safety, feasibility, and results of early exercise testing in patients with chest pain admitted to the emergency room of a chest pain unit, in whom acute myocardial infarction and high-risk unstable angina had been ruled out. METHODS: A study of 1060 consecutive patients with chest pain admitted to the emergency room of the chest pain unit was carried out. Of these, 677 (64%) patients were eligible for exercise testing, but only 268 (40%) underwent the test. RESULTS: The mean age of the patients studied was 51.7±12.1 years, and 188 (70%) were males. Twenty-eight (10%) patients had a previous history of coronary artery disease, 244 (91%) had a normal or nonspecific electrocardiogram, and 150 (56%) underwent exercise testing within a 12-hour interval. The results of the exercise tests were as follows: 34 (13%) positive, 191 (71%) negative, and 43 (16%) inconclusive. Of the patients with a positive exercise test, 21 (62%) underwent coronary angiography, 11 underwent angioplasty, and 2 underwent myocardial revascularization. In a univariate analysis, type A/B chest pain (definitely/probably anginal) (p<0.0001), previous coronary artery disease (p<0.0001), and route 2 (patients at higher risk) (p<0.0001) correlated with a positive or inconclusive test. CONCLUSION: In patients with chest pain in whom acute myocardial infarction and high-risk unstable angina had been ruled out, exercise testing proved to be feasible, safe, and well tolerated.
Abstract:
The prevalence of people reporting pain in the shoulder joint complex, with concomitant limitation in the ability to perform activities of daily living, is high. These prevalence levels place a burden both on patients and on society. Current scientific evidence points to a relationship between alterations of the scapulothoracic joint and pathologies of the glenohumeral joint. The ability to quantify, kinematically and kinetically, dysfunctions at the scapulothoracic and glenohumeral joints is of great importance to both the biomechanics and the clinical communities. In the course of this thesis, a three-dimensional musculoskeletal model of the shoulder joint complex was developed in the OpenSim software, including representations of the thorax/spine, clavicle, scapula, humerus, radius and ulna, the joints that allow the relative motion of these segments, as well as 16 muscles and 4 ligaments. With a total of 11 degrees of freedom, including a novel scapulothoracic joint model, the results show that the model can reconstruct scapulothoracic and glenohumeral movements accurately and quickly using inverse kinematics and inverse and forward dynamics. It also features a novel transformation method to determine subject-specific muscle attachment sites. The main motivations behind this thesis were to deepen current knowledge of dysfunctions of the shoulder joint complex and, at the same time, to provide the clinical community with a freely available biomechanical tool to better support clinical decisions and thereby contribute to more effective practice.
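To illustrate the inverse-kinematics step mentioned above, the sketch below solves a deliberately simplified planar two-segment chain by least squares; it is not the 11-degree-of-freedom OpenSim shoulder model developed in the thesis, and the segment lengths and marker noise are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

# Segment lengths of a hypothetical planar two-link chain (metres).
L1, L2 = 0.15, 0.30

def forward_kinematics(q):
    """Marker positions at the intermediate joint and the endpoint for joint angles q (rad)."""
    q1, q2 = q
    joint = np.array([L1 * np.cos(q1), L1 * np.sin(q1)])
    endpoint = joint + np.array([L2 * np.cos(q1 + q2), L2 * np.sin(q1 + q2)])
    return np.concatenate([joint, endpoint])

def inverse_kinematics(measured_markers, q0=(0.1, 0.1)):
    """Find the joint angles that best reproduce the measured marker positions."""
    residual = lambda q: forward_kinematics(q) - measured_markers
    return least_squares(residual, q0).x

# Synthetic "measured" markers generated from known angles, with a little noise.
true_q = np.array([0.4, 0.9])
markers = forward_kinematics(true_q) + np.random.normal(0, 1e-3, 4)
print("recovered joint angles (rad):", inverse_kinematics(markers))
```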
Abstract:
Micro- and mesoporous crystalline materials incorporating micro/nanoparticles or clusters of species that form entities of their own and interact with the frameworks, such as metal oxides, charge-balancing cations, metallic species, etc., can potentially be used as "host materials" in optics, electronics, sensors, as magnetic materials, in environmental pollution-control strategies, in catalysis in general, and in separation processes. Microporous zeolites with medium pores (ZSM) and large pores (Y), as well as mesoporous materials (MCM-41), will be synthesized and characterized by various physicochemical techniques. Their application will be directed, on the one hand, to technologically innovative catalytic processes in the following fields: a) environmental catalysis: transformation of plastic waste (polyethylene, polypropylene, polystyrene, or mixtures thereof) into higher-value hydrocarbons (gasoline, diesel, liquefied petroleum gases, aromatic hydrocarbons); b) fine chemistry: partial oxidation of aromatic hydrocarbons toward commodities, pharmaceuticals, etc. On the other hand, the magnetic properties (ferromagnetism, paramagnetism, superparamagnetism, diamagnetism) exhibited by some of these materials will be evaluated, seeking, where feasible, their correlation with the catalytic properties. The optimal synthesis conditions will be studied using hydrothermal or sol-gel techniques, controlling variables such as synthesis temperatures and times, pH of the initial, intermediate, and final gels, type of precursor sources, etc. Modification of the matrices with Co, Cr, Mn, H, or Zn will be carried out by various chemical treatments (ion exchange, impregnation) from the corresponding salts, in order to incorporate active elements in ionic, metallic, or cluster form, etc.; the influence of different thermal treatments (oxidizing, inert, or reducing; dynamic or static atmospheres; temperatures) will also be evaluated. Structural characterization of the materials will be carried out by: AA (elemental bulk quantification); XRD (detection of oxide or metallic species of Zn, Co, Cr, or Mn; determination of crystallinity and structure); BET (determination of surface area); DSC-TG-DTA (determination of the stability of the synthesized matrices); FTIR of adsorbed pyridine (determination of the type, strength, and quantity of active sites); Raman and UV diffuse reflectance (determination of ionic species interacting with or deposited on the matrices); TPR (identification of reducible species); SEM-EDAX (determination of the particle size of active species and of the matrices, and surface quantification); SQUID and vibrating-sample magnetometers (measurement of magnetization and magnetic susceptibility at room temperature with varying applied external field, and over a temperature range of 4 to 300 K at fixed external field). In summary, three major areas of work are proposed: 1) synthesis and characterization of nanostructured micro- and mesoporous materials; 2) evaluation of catalytic properties; 3) evaluation of magnetic properties. These lines of work will allow us to generate new scientific and technological knowledge while training human resources (two postdoctoral fellows; one doctoral fellow; three undergraduate research fellows; approximately 15 undergraduate interns per year) prepared to take on such challenges.
The knowledge generated is continuously incorporated into the undergraduate and graduate teaching activities of the project members. It will ultimately be disseminated and submitted for peer review through presentations at national and international conferences and in specialized journals.
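For orientation, the surface-area determination by BET mentioned above rests on the standard linearized BET isotherm; the abstract does not give the specific form used, so the expressions below are only the textbook relations:

\[
\frac{1}{v\left[(p_0/p)-1\right]} \;=\; \frac{c-1}{v_m c}\,\frac{p}{p_0} \;+\; \frac{1}{v_m c},
\qquad
S_{\mathrm{BET}} \;=\; \frac{v_m\, N_A\, \sigma}{V_{\mathrm{mol}}\, m}
\]

where v is the quantity of gas adsorbed at relative pressure p/p0, v_m the monolayer capacity, c the BET constant, N_A Avogadro's number, σ the cross-sectional area of the adsorbate molecule, V_mol the molar volume of the adsorbate gas, and m the sample mass.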
Abstract:
Today's advances in high-performance computing are driven by the parallel processing capabilities of available hardware architectures. These architectures enable the acceleration of algorithms when the algorithms are properly parallelized and exploit the specific processing power of the underlying architecture; the appropriate parallel form of an algorithm is, in turn, specific to each type of parallel hardware. Most current general-purpose processors integrate several cores on a single chip, resulting in what is known as a Symmetric Multiprocessing (SMP) unit. Nowadays even desktop computers use multicore processors, and the industry trend is to increase the number of integrated cores as the technology matures. Graphics Processor Units (GPU), originally designed to handle only video processing, have emerged as interesting alternatives for algorithm acceleration: currently available GPUs can run on the order of 200 to 400 parallel processing threads. Scientific computing can be implemented on this hardware thanks to the programmability of the new GPUs, now referred to as General Processing Graphics Processor Units (GPGPU). However, GPGPUs offer little memory compared with general-purpose processors, so the implementation of algorithms must be addressed carefully. Finally, Field Programmable Gate Arrays (FPGA) are programmable devices that can implement hardware logic with low latency, high parallelism, and deep pipelines. These devices can be used to implement specific algorithms that need to run at very high speeds, but their programming is harder than software approaches and debugging is typically time-consuming. In this context, where several alternatives for speeding up algorithms are available, our work aims at determining the main features of these architectures and developing the know-how required to accelerate algorithm execution on them: starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated, identify which algorithms fit better on a given architecture, and combine architectures where their complementary strengths are beneficial. In particular, we consider the degree of data dependency, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
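As a minimal illustration of the SMP-style data parallelism discussed above (a toy workload, not one of the algorithms studied in this project), the sketch below maps a CPU-bound task with no data dependencies over the available cores using Python's multiprocessing module; the same task would need a very different formulation for a GPGPU or an FPGA.

```python
import multiprocessing as mp
import time

def heavy_kernel(x):
    """A CPU-bound stand-in for a per-element computation with no data dependencies."""
    total = 0
    for i in range(50_000):
        total += (x * i) % 7
    return total

if __name__ == "__main__":
    data = list(range(128))

    start = time.perf_counter()
    serial = [heavy_kernel(x) for x in data]
    t_serial = time.perf_counter() - start

    start = time.perf_counter()
    with mp.Pool(processes=mp.cpu_count()) as pool:   # one worker per core (SMP)
        parallel = pool.map(heavy_kernel, data)
    t_parallel = time.perf_counter() - start

    assert serial == parallel
    print(f"serial {t_serial:.2f}s, parallel {t_parallel:.2f}s on {mp.cpu_count()} cores")
```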
Abstract:
The purpose of this research is to examine the main economic, legislative, and socio-cultural factors currently influencing the pub trade in Ireland and their specific impact on a sample of publicans in both Galway city and county. In approaching this task the author engaged in a comprehensive literature review on the origin, history and evolution of the Irish pub; examined the socio-cultural and economic role of the public house in Ireland; and developed a profile of the Irish pub by undertaking a number of semi-structured interviews with pub owners from the area. In doing so, the author obtained the publicans' views on the current state of their businesses, the extent to which patterns of trade have changed over recent years, the challenges and factors currently influencing their trade, the actions they believed necessary to promote the trade and address perceived difficulties, and how they viewed the future of the pub business within the framework of the current regulatory regime. In light of this research, the author identified a number of key findings and put forward a series of recommendations designed to promote the future success and development of the pub trade in Ireland. The research established that public houses are currently operating under a very unfavourable regulatory framework that has resulted in a serious decline of the trade over the last decade. This decline appears to have coincided initially with the introduction of the ban on smoking in the workplace and was exacerbated further by the advent of more severe drink-driving laws, especially mandatory breath testing. Other unfavourable conditions include high levels of excise duty, value added tax and local authority commercial rates. In addition to these regulatory factors, the research established that a major impediment to the pub trade is unfair competition from supermarkets and other off-licence retail outlets, especially the phenomenon of below-cost selling of alcohol. The recession has also been a major contributory factor in the decline of the trade, as has the trend towards lifestyle changes and home drinking, mirroring practice in some continental European countries.
Abstract:
As digital image processing techniques become increasingly used in a broad range of consumer applications, the critical need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is of crucial importance that we are able to study their performance empirically. Apart from the field of biometrics, little emphasis has been placed on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems quickly and more efficiently.
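A minimal sketch of the kind of automated evaluation such a framework targets is shown below; the test-case layout, the single PSNR metric and the pass threshold are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def psnr(reference, result, peak=255.0):
    """Peak signal-to-noise ratio between a reference image and an algorithm's output."""
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def evaluate(algorithm, test_cases, threshold_db=30.0):
    """Run an image-processing algorithm over (name, input, ground-truth) cases and report pass/fail."""
    report = []
    for name, image, ground_truth in test_cases:
        score = psnr(ground_truth, algorithm(image))
        report.append((name, score, score >= threshold_db))
    return report

# Toy usage: "denoise" a noisy ramp image with a trivial identity algorithm.
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
noisy = np.clip(ramp + np.random.normal(0, 5, ramp.shape), 0, 255).astype(np.uint8)
for name, score, passed in evaluate(lambda img: img, [("ramp_denoise", noisy, ramp)]):
    print(f"{name}: PSNR={score:.1f} dB, pass={passed}")
```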
Abstract:
Finfish pots have emerged as a "responsible" gear when used in combination with conservation and technical measures to sustain fisheries. Previous trials in Irish waters have produced no published data, so the three designs tested in the current study provide new information on this gear. The most successful traps in terms of fish catch were rigid steel-framed rectangular pots used to target conger eel. Although commercial yield was low (0.2 per trap haul), potential existed for a viable pot fishery. Deployment and storage of Norwegian floating pots was conducted with relative ease, but performance in the water was poor, resulting in loss of gear. Catch returns were notable even though effort was restricted, but mega-faunal by-catch was a problem, which led to the ending of this trial. From these initial trials it was evident that catch rates were low compared with established Norwegian fisheries (3.6 cod per pot), which led to the use of pots already established in the crustacean fishery to target species readily accessible to pot capture. Although fished and designed differently, these gears provided an opportunity to establish the benefits of pot fishing for fish quality and to determine the effects on by-catch. The effects of three catching methods (pots, angling and trawl) and of air exposure on the physiological status of a common by-catch species, the lesser spotted dogfish Scyliorhinus canicula (L.), were examined using a range of physiological biomarkers (plasma catecholamines, glucose, lactate, muscle pH and muscle lactate). The physiological response of fish to an emersion stress regime was a significant metabolic disturbance across the groups, but this may not have weakened the overall health of the fish, as indicated by the recovery of some metabolites. Plasma glucose and lactate concentrations did not, however, recover to baseline levels, indicating that an accurate profile requires responses to be determined by a suite of biomarkers. The responses did not demonstrate that samples from the pots were significantly less stressed than those from the other two methods, angling and trawling, which is in contrast to many other studies. The employment of finfish potting in Irish waters therefore needs further consideration before it is promoted further as a more responsible method to supplement or replace established techniques.
Abstract:
This study analyses the area of construction and demolition waste (C&DW) auditing. The production of C&DW has grown year after year since the Environmental Protection Agency (EPA) first published a report in 1996 providing data on C&DW quantities for 1995 (EPA, 1996a). The most recent report produced by the EPA is based on data for 2005 (EPA, 2006) and estimated the quantity of C&DW produced for that period at 14,931,486 tonnes. However, this is a 'data update' report containing an update on certain waste statistics, so any total provided is not a true reflection of the waste produced for that period. This illustrates that a more construction-site-specific form of data is required. The Department of Building and Civil Engineering at the Galway-Mayo Institute of Technology has carried out two recent research projects (Grimes, 2005; Kelly, 2006) in this area, which produced waste production indicators based on site-specific data. This involved the design and testing of an original auditing tool based on visual characterisation and the application of conversion factors. One of the main recommendations of these studies was to compare this visual characterisation approach with a photogrammetric sorting methodology. This study investigates the application of photogrammetric sorting on a residential construction site in the Galway region. A visual characterisation study was also carried out on the same project to compare the two methodologies and assess their practical application in a construction site environment. Data collected from the waste management contractor on site was also used to provide further evaluation. From this, a set of waste production indicators for new residential construction was produced:
- 50.8 kg/m2 using data provided by the visual characterisation method and the Landfill Levy conversion factors;
- 43 kg/m2 using data provided by the photogrammetric sorting method and the Landfill Levy conversion factors;
- 23.8 kg/m2 using data provided by the waste management contractor (WMC).
The acquisition of the data from the waste management contractor was a key element in testing the information produced by the visual characterisation and photogrammetric sorting methods: the actual weights provided by the contractor differ considerably from the quantities estimated by the two auditing methods.
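The kg/m2 indicators listed above come from converting the audited volume of each waste stream to mass with published conversion factors and normalising by gross floor area. The sketch below illustrates that arithmetic with placeholder volumes and factors, not the Landfill Levy values or the study's data.

```python
# Hypothetical audited volumes per waste stream (m^3) and density-style
# conversion factors (kg/m^3); a real audit would use the Landfill Levy factors.
volumes_m3 = {"timber": 12.0, "plasterboard": 6.0, "mixed": 20.0}
conversion_kg_per_m3 = {"timber": 150.0, "plasterboard": 240.0, "mixed": 300.0}

gross_floor_area_m2 = 180.0   # floor area of the audited dwelling (hypothetical)

total_kg = sum(volumes_m3[s] * conversion_kg_per_m3[s] for s in volumes_m3)
indicator = total_kg / gross_floor_area_m2
print(f"waste production indicator: {indicator:.1f} kg/m2")
```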
Abstract:
The research described in this thesis was developed as part of the Information Management for Green Design (IMAGREE) Project. The IMAGREE Project was funded by Enterprise Ireland under a Strategic Research Grant Scheme as a partnership project between Galway-Mayo Institute of Technology and CIMRU, University College Galway. The project aimed to develop a CAD-integrated software tool to support environmental information management for design, particularly for the electronics-manufacturing sector in Ireland.
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use state-of-the-art hardware, software and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on issues such as efficiency, flexibility, data accuracy, test quality and data integrity. The findings reaffirm how computer integration continues to benefit organisations, and with more recent advances in computer technology, communication methods and software capabilities, less expensive and more sophisticated test solutions are now possible. This allows more organisations to benefit from the many advantages associated with CIT. Examples of computer-integrated test set-ups and their associated benefits are discussed.
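As a vendor-neutral illustration of the data-acquisition and logging core of such a CIT system, the sketch below polls a simulated temperature channel and appends timestamped readings to a CSV file; a real installation would instead call the driver API of the DAQ hardware actually fitted.

```python
import csv
import random
import time
from datetime import datetime

def read_temperature_c():
    """Simulated DAQ channel; a real system would query the acquisition hardware here."""
    return 4.0 + random.gauss(0, 0.2)   # refrigerated-space temperature, degrees C

def log_test_run(path, samples=10, interval_s=1.0):
    """Poll the channel at a fixed interval and write timestamped readings to a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "temperature_c"])
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat(timespec="seconds"),
                             round(read_temperature_c(), 2)])
            time.sleep(interval_s)

log_test_run("refrigeration_test.csv", samples=5, interval_s=0.5)
```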