970 results for Unit testing
Abstract:
OBJECTIVE: To compare blood pressure response to dynamic exercise in hypertensive patients taking trandolapril or captopril. METHODS: We carried out a prospective, randomized, blinded study with 40 patients with primary hypertension and no other associated disease. The patients were divided into 2 groups (n=20), paired by age, sex, race, and body mass index, and underwent 2 symptom-limited exercise tests on a treadmill before and after 30 days of treatment with captopril (75 to 150 mg/day) or trandolapril (2 to 4 mg/day). RESULTS: The groups were similar prior to treatment (p<0.05), and both drugs reduced blood pressure at rest (p<0.001). During treatment, trandolapril caused a greater increase in functional capacity (+31%) than did captopril (+17%; p=0.01), and provided better blood pressure control during exercise, observed as a reduction in the variation of systolic blood pressure/MET (trandolapril: 10.7±1.9 mmHg/U vs 7.4±1.2 mmHg/U, p=0.02; captopril: 9.1±1.4 mmHg/U vs 11.4±2.5 mmHg/U, p=0.35), a reduction in peak diastolic blood pressure (trandolapril: 116.8±3.1 mmHg vs 108.1±2.5 mmHg, p=0.003; captopril: 118.2±3.1 mmHg vs 115.8±3.3 mmHg, p=0.35), and a reduction in the interruption of the tests due to excessive elevation in blood pressure (trandolapril: 50% vs 15%, p=0.009; captopril: 50% vs 45%, p=0.32). CONCLUSION: Monotherapy with trandolapril is more effective than monotherapy with captopril in controlling blood pressure during exercise in hypertensive patients.
Abstract:
Advances in computing power today are driven by parallel processing, given the features of new hardware architectures. Using this hardware appropriately accelerates the execution of algorithms (programs); however, converting an algorithm into a suitable parallel form is complex, and that form is, in turn, specific to each type of parallel hardware. The most common general-purpose processors today are multicore parallel processors, also called Symmetric Multi-Processors (SMP). It is now hard to find a desktop processor without some degree of SMP-style parallelism, and the industry trend is toward processors with an ever larger number of cores. Graphics Processor Units (GPU), originally designed to handle video processing, have in turn increased their computing power by integrating multiple processing units, to the point that boards capable of running 200 to 400 parallel processing threads are readily available. These processors are very fast and specialized for the task for which they were designed, mainly video processing; but because that task has much in common with scientific computing, these devices have been repositioned under the name General Processing Graphics Processor Unit (GPGPU). Unlike the SMP processors mentioned above, GPGPUs are not general purpose: the limited memory available on each board and the kind of parallel processing required to use them productively complicate their general use. Finally, programmable logic devices (FPGA) can perform large numbers of operations in parallel and can therefore be used to implement specific algorithms that exploit the parallelism they offer; their drawback is the complexity of programming and testing the algorithm instantiated on the device. Given this diversity of parallel processors, our work focuses on analyzing the specific characteristics of each of them and their impact on the structure of algorithms, so that their use yields processing performance commensurate with the resources employed and so that they can be combined in mutually beneficial ways. Specifically, starting from the characteristics of the hardware, we determine the properties a parallel algorithm must have in order to be accelerated; the characteristics of the parallel algorithms will in turn determine which of these new types of hardware are the most suitable for their instantiation. In particular, we take into account the degree of data dependence, the need for synchronization during parallel processing, the size of the data to be processed, and the complexity of parallel programming on each type of hardware.
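The abstract's central criterion, the degree of data dependence, can be illustrated with a minimal sketch (illustrative only, not taken from the thesis): an SMP multicore accelerates a loop whose iterations are independent, while a loop-carried dependence resists that kind of parallelization. The function names and workload below are assumptions.

```python
# Minimal sketch (illustrative, not from the thesis): data dependence
# decides whether an SMP multicore can accelerate a loop.
from multiprocessing import Pool

def heavy(x):
    # Independent per-element work: no data dependence between iterations,
    # so the map below can be split across cores.
    return sum(i * i for i in range(x))

def independent_parallel(data):
    with Pool() as pool:  # one worker per available core by default
        return pool.map(heavy, data)

def loop_carried(data):
    # Each step needs the previous result (a loop-carried dependence),
    # so this version cannot be split across cores the same way.
    acc = 0
    out = []
    for x in data:
        acc = (acc + heavy(x)) % 1_000_003
        out.append(acc)
    return out

if __name__ == "__main__":
    data = [20_000] * 32
    print(len(independent_parallel(data)), len(loop_carried(data)))
```

The same criterion, together with the synchronization needs and data size mentioned in the abstract, is what determines whether SMP, GPGPU or FPGA resources can be used productively for a given algorithm.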
Abstract:
As digital image processing techniques become increasingly used in a broad range of consumer applications, the critical need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is of crucial importance that we are able to empirically study their performance. Apart from the field of biometrics, little emphasis has been put on algorithm performance evaluation until now, and where evaluation has taken place, it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems more quickly and efficiently.
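As a rough illustration of the kind of automated evaluation such a framework targets (the metric, test-set layout and function names below are assumptions, not the paper's API): each algorithm is run over a fixed test set and scored against ground truth, so performance regressions surface early in the development cycle.

```python
# Illustrative sketch of an automated image-algorithm evaluation loop.
# The algorithms, test-set layout, and scoring metric are assumptions.
import numpy as np

def mean_absolute_error(output, truth):
    return float(np.mean(np.abs(output.astype(float) - truth.astype(float))))

def evaluate(algorithms, test_cases):
    """algorithms: {name: callable(image) -> image}
       test_cases: list of (input_image, ground_truth) arrays."""
    scores = {}
    for name, algo in algorithms.items():
        errors = [mean_absolute_error(algo(img), truth) for img, truth in test_cases]
        scores[name] = sum(errors) / len(errors)
    return scores  # lower is better; track across versions to catch regressions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cases = [(rng.integers(0, 256, (8, 8)), rng.integers(0, 256, (8, 8)))
             for _ in range(3)]
    identity = lambda img: img   # stand-in for a real image processing algorithm
    print(evaluate({"identity": identity}, cases))
```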
Abstract:
The research described in this thesis was developed as part of the Information Management for Green Design (IMAGREE) Project. The IMAGREE Project was funded by Enterprise Ireland under a Strategic Research Grant Scheme as a partnership project between Galway Mayo Institute of Technology and CIMRU, University College Galway. The project aimed to develop a CAD-integrated software tool to support environmental information management for design, particularly for the electronics-manufacturing sector in Ireland.
Abstract:
This is a study of a state-of-the-art implementation of a new computer integrated testing (CIT) facility within a company that designs and manufactures transport refrigeration systems. The aim was to use state-of-the-art hardware, software and planning procedures in the design and implementation of three CIT systems. Typical CIT system components include data acquisition (DAQ) equipment, application and analysis software, communication devices, computer-based instrumentation and computer technology. It is shown that the introduction of computer technology into the area of testing can have a major effect on such issues as efficiency, flexibility, data accuracy, test quality and data integrity. Findings reaffirm how the overall area of computer integration continues to benefit any organisation; with more recent advances in computer technology, communication methods and software capabilities, less expensive and more sophisticated test solutions are now possible. This allows more organisations to benefit from the many advantages associated with CIT. Examples of computer-integrated test set-ups and their associated benefits are discussed.
Abstract:
Background: Recent studies have suggested that B-type Natriuretic Peptide (BNP) is an important predictor of ischemia and death in patients with suspected acute coronary syndrome. Increased levels of BNP are seen after episodes of myocardial ischemia and may be related to future adverse events. Objectives: To determine the prognostic value of BNP for major cardiac events and to evaluate its association with ischemic myocardial perfusion scintigraphy (MPS). Methods: This study retrospectively included 125 patients admitted to the chest pain unit between 2002 and 2006 who had their BNP levels measured on admission and underwent MPS for risk stratification. BNP values were compared with the results of the MPS. The chi-square test was used for qualitative variables and the Student t test for quantitative variables. Survival curves were adjusted using the Kaplan-Meier method and analyzed using Cox regression. The significance level was 5%. Results: The mean age was 63.9 ± 13.8 years, and males represented 51.2% of the sample. Ischemia was found in 44% of the MPS. The mean BNP level was higher in patients with ischemic than with non-ischemic MPS (188.3 ± 208.7 versus 131.8 ± 88.6; p = 0.003). A BNP level greater than 80 pg/mL was the strongest predictor of ischemia on MPS (sensitivity = 60%, specificity = 70%, accuracy = 66%, PPV = 61%, NPV = 70%) and could predict medium-term mortality (RR = 7.29, 95% CI: 0.90-58.6; p = 0.045) independently of the presence of ischemia. Conclusions: BNP levels are associated with ischemic MPS findings and adverse prognosis in patients presenting with acute chest pain to the emergency room, thus providing important prognostic information for an unfavorable clinical outcome.
Abstract:
ST2 is a biomarker of the interleukin-1 receptor family, and circulating soluble ST2 concentrations are believed to reflect cardiovascular stress and fibrosis. Recent studies have demonstrated soluble ST2 to be a strong predictor of cardiovascular outcomes in both chronic and acute heart failure. It is a new biomarker that meets all the required criteria for a useful biomarker. Of note, it adds information to natriuretic peptides (NPs), and some studies have shown it is even superior in terms of risk stratification. Since the introduction of NPs, this has been the most promising biomarker in the field of heart failure, and it might be particularly useful as a therapy guide.
Abstract:
This work describes a test tool that allows performance testing of different end-to-end available bandwidth estimation algorithms and of their different implementations. The goal of such tests is to find the best-performing algorithm and implementation and to use it in the congestion control mechanism of high-performance reliable transport protocols. The main idea of this paper is to describe the options for providing an available bandwidth estimation mechanism for high-speed data transport protocols and to develop the basic functionality of such a test tool, with which it will be possible to manage the test application entities on all involved testing hosts, aided by some middleware.
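A minimal sketch of the comparison such a tool is meant to automate, under assumptions of our own (the estimator names, reference bandwidth and error metric are illustrative placeholders, not the paper's algorithms or middleware interface): each candidate implementation is run against a known path capacity and ranked by estimation error and measurement time.

```python
# Illustrative harness for comparing available-bandwidth estimators.
# Estimators, ground truth, and metrics here are placeholders.
import time

def run_benchmark(estimators, ground_truth_mbps):
    """estimators: {name: callable() -> estimated bandwidth in Mbps}."""
    results = []
    for name, estimate in estimators.items():
        start = time.perf_counter()
        value = estimate()
        elapsed = time.perf_counter() - start
        error = abs(value - ground_truth_mbps) / ground_truth_mbps
        results.append((name, value, error, elapsed))
    # Rank by relative error first, then by measurement time.
    return sorted(results, key=lambda r: (r[2], r[3]))

if __name__ == "__main__":
    # Stand-ins for real estimator implementations driven on the test hosts.
    fake_estimators = {
        "estimator_a": lambda: 94.0,
        "estimator_b": lambda: 78.5,
    }
    for name, value, err, sec in run_benchmark(fake_estimators, ground_truth_mbps=100.0):
        print(f"{name}: {value:.1f} Mbps, error {err:.1%}, {sec * 1e3:.2f} ms")
```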
Abstract:
Today, usability testing is essential in the development of software and systems. A stationary usability lab offers many possibilities for evaluating usability, but it reaches its limits in terms of flexibility and experimental conditions. Mobile usability studies deliberately take outside influences into account, and they require a specially adapted approach to preparation, implementation and evaluation. Using the example of a mobile eye tracking study, the difficulties and opportunities of mobile testing are considered.
Abstract:
Magdeburg, University, Faculty of Process and Systems Engineering, Dissertation, 2014
Abstract:
v.34:no.28(1954)
Abstract:
We present a computer-assisted analysis of combinatorial properties of the Cayley graphs of certain finitely generated groups: given a group with a finite set of generators, we study the density of the corresponding Cayley graph, that is, the least upper bound for the average vertex degree (= number of adjacent edges) of any finite subgraph. It is known that an m-generated group is amenable if and only if the density of the corresponding Cayley graph equals 2m. We test amenable and non-amenable groups, as well as groups for which amenability is unknown. In the latter class we focus on Richard Thompson’s group F.
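For readability, the quantities the abstract describes in words can be written out as follows (a restatement of the stated definitions, with the standard convention that the average degree of a finite subgraph equals twice its edge count over its vertex count):

```latex
% Density of the Cayley graph Cay(G, S) with |S| = m, as used in the abstract:
% the least upper bound of the average vertex degree over finite subgraphs.
\[
  \operatorname{dens}\bigl(\mathrm{Cay}(G,S)\bigr)
  = \sup_{\substack{F \subseteq \mathrm{Cay}(G,S) \\ F \text{ finite}}}
    \frac{1}{|V(F)|} \sum_{v \in V(F)} \deg_F(v)
  = \sup_{F \text{ finite}} \frac{2\,|E(F)|}{|V(F)|},
\qquad
  G \text{ amenable} \iff \operatorname{dens}\bigl(\mathrm{Cay}(G,S)\bigr) = 2m.
\]
```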
Abstract:
Research produced during a stay at London South Bank University, United Kingdom, between September and December 2005. Sex work in the United Kingdom is studied from three different perspectives. First, the history of Anglo-Saxon feminism and its views on prostitution is addressed through an examination of the sources. Second, the legal and political situation is outlined. Finally, the main organisations supporting sex workers in the city of London are briefly presented.
Abstract:
This paper tests the Entrepreneurial Intention Model, which is adapted from the Theory of Planned Behavior, on a sample of 533 individuals from two quite different countries: one European (Spain) and the other East Asian (Taiwan). A newly developed Entrepreneurial Intention Questionnaire (EIQ) has been used, which tries to overcome some of the limitations of previous instruments. Structural equation techniques were used in the empirical analysis. Results are generally satisfactory, indicating that the model is probably adequate for studying entrepreneurship. Support for the model was found not only in the combined sample, but also in each of the national ones. However, some differences arose that may indicate that demographic variables contribute differently to the formation of perceptions in each culture.