971 results for Parallel design multicenter
Abstract:
Objective: The aim of the study was to evaluate clinical and laboratory features of 1234 patients with different etiologies of hyperprolactinemia, as well as the response of 388 patients with prolactinomas to dopamine agonists. Design, setting, and patients: A total of 1234 hyperprolactinemic patients from 10 Brazilian endocrine centers were enrolled in this retrospective study. Main outcome measures: PRL measurement, thyroid function tests, and screening for macroprolactin were conducted. Results: Patients were subdivided as follows: 56.2% had prolactinomas, 14.5% drug-induced hyperprolactinemia, 9.3% macroprolactinemia, 6.6% non-functioning pituitary adenomas, 6.3% primary hypothyroidism, 3.6% idiopathic hyperprolactinemia, and 3.2% acromegaly. Clinical manifestations were similar irrespective of the etiology of the hyperprolactinemia. The highest PRL levels were observed in patients with prolactinomas, but there was considerable overlap in PRL values among all groups. However, PRL >500 ng/ml allowed a clear distinction between prolactinomas and the other etiologies. Cabergoline (CAB) was more effective than bromocriptine (BCR) in normalizing PRL levels (81.9% vs 67.1%, p<0.0001) and in inducing significant tumor shrinkage and complete disappearance of the tumor mass. Drug resistance was observed in 10% of patients treated with CAB and in 18.4% of those treated with BCR (p=0.0006). Side effects and intolerance were also more common in BCR-treated patients. Conclusion: Prolactinomas, drug-induced hyperprolactinemia, and macroprolactinemia were the three most common causes of hyperprolactinemia. Although PRL levels could not reliably define the etiology of hyperprolactinemia, PRL values >500 ng/ml were seen exclusively in patients with prolactinomas. CAB was significantly more effective than BCR in terms of prolactin normalization, tumor shrinkage, and tolerability.
Abstract:
Background: Benznidazole is effective for treating acute and chronic (recently acquired) Trypanosoma cruzi infection (Chagas disease). Recent data indicate that parasite persistence plays a pivotal role in the pathogenesis of chronic Chagas cardiomyopathy. However, the efficacy of trypanocidal therapy in preventing clinical complications in patients with preexisting cardiac disease is unknown. Study Design: BENEFIT is a multicenter, randomized, double-blind, placebo-controlled clinical trial of 3,000 patients with Chagas cardiomyopathy in Latin America. Patients are randomized to receive benznidazole (5 mg/kg per day) or matched placebo for 60 days. The primary outcome is the composite of death; resuscitated cardiac arrest; sustained ventricular tachycardia; insertion of a pacemaker or cardiac defibrillator; cardiac transplantation; and development of new heart failure, stroke, or systemic or pulmonary thromboembolic events. The average follow-up time will be 5 years, and the trial has 90% power to detect a 25% relative risk reduction. The BENEFIT program also comprises a substudy evaluating the effects of benznidazole on parasite clearance and an echo substudy exploring the impact of etiologic treatment on left ventricular function. Recruitment started in November 2004, and >1,000 patients have been enrolled in 35 centers in Argentina, Brazil, and Colombia to date. Conclusion: This is the largest trial yet conducted in Chagas disease. BENEFIT will clarify the role of trypanocidal therapy in preventing cardiac disease progression and death.
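As a rough, illustrative check of the quoted design figure, the standard two-proportion sample-size formula shows what "90% power to detect a 25% relative risk reduction" implies. The control-group event rate used below is a placeholder assumption, since the abstract does not state the trial's actual design assumptions.

```python
# Back-of-the-envelope sample-size check (illustrative only; the 18%
# control-group event rate is an assumed placeholder, not a trial figure).
from scipy.stats import norm

def n_per_group(p_control, rrr, alpha=0.05, power=0.90):
    """Two-proportion sample size for a two-sided test at level alpha."""
    p_treat = p_control * (1 - rrr)      # 25% relative risk reduction
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return (z_alpha + z_power) ** 2 * var / (p_control - p_treat) ** 2

print(round(2 * n_per_group(p_control=0.18, rrr=0.25)))  # ~2,700 in total
```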
Abstract:
Multicore platforms have turned parallelism into a mainstream concern. Parallel programming models are being put forward to give application programmers a better way to expose opportunities for parallelism: the programmer points out potentially parallel regions within tasks, and the actual, dynamic scheduling of these regions onto processors is performed at runtime, exploiting the maximum amount of parallelism available. It is in this context that this paper proposes a scheduling approach that combines the constant-bandwidth server abstraction with a priority-aware work-stealing load-balancing scheme which, while ensuring isolation among tasks, enables parallel tasks to be executed on more than one processor at any given time instant.
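A minimal sketch of the priority-aware work-stealing side of this scheme (the constant-bandwidth server accounting is omitted, and all names here are illustrative, not the paper's implementation): each worker owns a deque, pops its own newest job, and steals the oldest job from the victim whose pending work has the highest priority.

```python
# Minimal priority-aware work-stealing sketch (single-threaded model).
from collections import deque

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.jobs = deque()                 # (priority, job) pairs

    def push(self, prio, job):              # owner enqueues at the right
        self.jobs.append((prio, job))

    def pop(self):                          # owner takes its newest job
        return self.jobs.pop() if self.jobs else None

    def steal(self):                        # thief takes the oldest job
        return self.jobs.popleft() if self.jobs else None

def next_job(workers, me):
    job = workers[me].pop()
    if job is not None:
        return job
    # Priority-aware victim selection: steal from the worker whose
    # oldest pending job has the highest priority (lower = more urgent).
    victims = [w for i, w in enumerate(workers) if i != me and w.jobs]
    return min(victims, key=lambda w: w.jobs[0][0]).steal() if victims else None
```

Stealing from the old end keeps the owner's cache-hot work local, while the priority-aware victim choice biases load balancing toward the most urgent tasks.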
Abstract:
This letter presents a new parallel method for hyperspectral unmixing based on the efficient combination of two popular methods: vertex component analysis (VCA) and sparse unmixing by variable splitting and augmented Lagrangian (SUNSAL). First, VCA extracts the endmember signatures; then, SUNSAL is used to estimate the abundance fractions. Both techniques are highly parallelizable, which significantly reduces the computing time. A design of the two methods for commodity graphics processing units (GPUs) is presented and evaluated. Experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 100 times, which delivers the real-time response required by many remotely sensed hyperspectral applications.
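A minimal sketch of this two-stage pipeline: the endmember matrix below stands in for VCA's output, and SciPy's non-negative least squares stands in for SUNSAL (both are placeholders; the paper's GPU kernels are not reproduced here). The per-pixel independence of the second stage is what makes it so amenable to GPU parallelization.

```python
# Two-stage unmixing sketch: endmembers assumed given (VCA stand-in),
# NNLS as a stand-in for SUNSAL's abundance estimation.
import numpy as np
from scipy.optimize import nnls

def unmix(cube, endmembers):
    """cube: (rows, cols, bands); endmembers: (bands, p) signature matrix."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    # Every pixel is solved independently; a GPU implementation maps
    # this loop onto thousands of concurrent threads.
    abundances = np.stack([nnls(endmembers, px)[0] for px in pixels])
    return abundances.reshape(rows, cols, -1)

rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(50, 3)))        # 50 bands, 3 endmembers
A = rng.dirichlet(np.ones(3), size=(8, 8))  # synthetic true abundances
est = unmix(A @ E.T, E)                     # (8, 8, 3) estimated fractions
```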
Abstract:
Dissertation presented to obtain the degree of Doctor in Informatics from the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA.
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond raw performance and area, we also consider metrics like performance efficiency and area efficiency, where the designer seeks the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on experimental execution results for a particular benchmark suite, our approach is to analyze the algorithms formally, considering the main architectural aspects, and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture was designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture, in a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, corresponding to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance on matrix multiplication.
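The abstract does not reproduce the derived cycle equation, but a generic roofline-style model of the same flavor illustrates how such an equation ties execution cycles to the listed architectural parameters (core count, per-core local memory, external memory bandwidth). The blocking and traffic estimates below are textbook assumptions, not the paper's formula.

```python
# Generic cycle model for blocked n x n dense matrix multiplication on a
# many-core chip (illustrative; not the paper's exact equation).
def matmul_cycles(n, cores, flops_per_core_cycle,
                  local_mem_words, mem_words_per_cycle):
    flops = 2 * n ** 3                              # multiply-adds in C = A*B
    compute = flops / (cores * flops_per_core_cycle)
    # Assume the aggregate on-chip memory holds ~3 b x b tiles (A, B, C);
    # off-chip traffic for blocked matmul is then roughly 2*n^3 / b words.
    b = (cores * local_mem_words / 3) ** 0.5
    memory = (2 * n ** 3 / b) / mem_words_per_cycle
    return max(compute, memory)                     # transfers overlap compute

cycles = matmul_cycles(n=4096, cores=64, flops_per_core_cycle=2,
                       local_mem_words=8 * 1024, mem_words_per_cycle=2)
sustained = 2 * 4096 ** 3 / cycles    # FLOPs/cycle; x clock (GHz) = GFLOP/s
```

Sweeping such a model over its parameters directly exposes when the design is compute-bound versus bandwidth-bound, which is the kind of trade-off behind the reported 71% and 84% efficiency figures.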
Abstract:
Single-processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve this performance, but with a considerable increase in the resources required. In many cases, however, simplified, optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations, and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
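A plain data model of the per-core configuration space just described (field names, the per-core DSP cost, and the resource check are illustrative assumptions, not the paper's tool flow):

```python
# Illustrative model of the configurable core/array parameters.
from dataclasses import dataclass, field

@dataclass
class CoreConfig:
    local_mem_kb: int = 4              # size of the core's internal memory
    fp_ops: tuple = ("add", "mul")     # supported floating-point operations
    ports: int = 2                     # number of interfacing ports

@dataclass
class ArrayConfig:
    cores: list = field(default_factory=list)

    def fits_zynq7020(self, dsp_per_core=6):
        # Crude feasibility check: the ZYNQ-7020 provides 220 DSP slices;
        # the per-core DSP cost is an assumed placeholder.
        return len(self.cores) * dsp_per_core <= 220

cfg = ArrayConfig(cores=[CoreConfig() for _ in range(32)])
print(cfg.fits_zynq7020())   # True, consistent with the reported 32-core fit
```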
Abstract:
Euromicro Conference on Digital System Design (DSD 2015), Funchal, Portugal.
Abstract:
Dissertation to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation to obtain the degree of Master in Biomedical Engineering
Abstract:
In this work, the optimization of an extrusion die designed for the production of a wood-plastic composite (WPC) decking profile is investigated. The optimization was performed with the help of numerical tools, more precisely by solving the continuity and momentum conservation equations that govern the flow, with the aim of properly balancing the flow distribution at the outlet of the extrusion die flow channel. To capture the rheological behavior of the material, we used a Bird-Carreau model with parameters obtained from a fit to experimental shear viscosity versus shear rate data collected from rheological tests. To yield a balanced output flow, several numerical runs were performed, adjusting the flow restriction in different regions of the cross-section of the flow channel's parallel zone. The simulations were compared with the experimental results and an excellent qualitative agreement was obtained, making it possible to attain a good balance of the output flow and emphasizing the advantages of using numerical tools to aid the design of profile extrusion dies.
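The Bird-Carreau fit mentioned above follows the standard form eta(gamma_dot) = eta_inf + (eta_0 - eta_inf) * [1 + (lambda * gamma_dot)^2]^((n-1)/2). A minimal sketch of such a fit is shown below, with synthetic placeholder data in place of the paper's rheological measurements.

```python
# Fitting the Bird-Carreau viscosity model to shear viscosity vs. shear
# rate data (the data here are synthetic placeholders, not the paper's).
import numpy as np
from scipy.optimize import curve_fit

def bird_carreau(gamma_dot, eta0, eta_inf, lam, n):
    return eta_inf + (eta0 - eta_inf) * (1 + (lam * gamma_dot) ** 2) ** ((n - 1) / 2)

gamma_dot = np.logspace(-2, 3, 20)                      # shear rates, 1/s
eta_meas = bird_carreau(gamma_dot, 1e4, 0.1, 2.0, 0.3)  # synthetic "data"

popt, _ = curve_fit(bird_carreau, gamma_dot, eta_meas,
                    p0=(1e4, 0.1, 1.0, 0.5))            # eta0, eta_inf, lam, n
```

The fitted parameters are what the flow solver then uses for the shear-rate-dependent viscosity in the momentum equations.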
Abstract:
"Series Title: IFIP - The International Federation for Information Processing, ISSN 1868-4238"
Abstract:
The potential of the 60 GHz frequency band in wireless communications is currently attracting considerable attention from academia and research teams. The 60 GHz band offers great possibilities for a wide variety of applications that are yet to be implemented, but these applications also pose substantial implementation challenges. One such example is building a high-data-rate transceiver that at the same time has very low power consumption. In this paper we present a prototype of a single-carrier (SC) transceiver system, giving a brief overview of the baseband design and emphasizing the most important decisions that need to be made. A brief overview of possible approaches to implementing the equalizer, the most complex module in the SC transceiver, is also presented. The main focus of this paper is to propose a parallel architecture for the receiver in a single-carrier communication system. This provides higher data rates than the communication system could otherwise achieve, at the price of higher power consumption. The proposed receiver architecture is illustrated in this paper, and the results of its implementation are compared with those of the corresponding serial implementation.
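Since the equalizer is singled out as the most complex receiver module, here is a minimal MMSE frequency-domain equalizer for one cyclic-prefixed SC block (an illustration of the equalizer's role only, not the parallel architecture proposed in the paper). Independent blocks, and independent frequency bins within a block, are precisely the units a parallel hardware receiver can process concurrently.

```python
# Minimal MMSE frequency-domain equalizer for a single-carrier block.
import numpy as np

def mmse_fde(rx_block, channel_freq, noise_var):
    """Equalize one cyclic-prefixed SC block of length N."""
    R = np.fft.fft(rx_block)                        # to the frequency domain
    H = channel_freq                                # channel response, length N
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # per-bin MMSE weights
    return np.fft.ifft(W * R)                       # back to the time domain
```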
Abstract:
Advances in computer memory technology justify research toward new and different views on computer organization. This paper proposes a novel memory-centric computing architecture whose goal is to merge memory and processing elements in order to provide better conditions for parallelization and performance. The paper introduces the architectural concepts and then describes the design and implementation of a corresponding assembler and simulator.
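The abstract only names the assembler and simulator, so the sketch below is a toy memory-to-memory instruction simulator in the spirit of merging storage and processing; the instruction set is invented purely for illustration.

```python
# Toy memory-to-memory ISA simulator: every operand lives in memory,
# so computation happens "at" memory words rather than in registers.
def run(program, mem):
    pc = 0
    while pc < len(program):
        op, dst, a, b = program[pc]
        if op == "add":                 # mem[dst] = mem[a] + mem[b]
            mem[dst] = mem[a] + mem[b]
        elif op == "mul":               # mem[dst] = mem[a] * mem[b]
            mem[dst] = mem[a] * mem[b]
        elif op == "jnz" and mem[a]:    # jump to b when mem[a] != 0
            pc = b
            continue
        pc += 1
    return mem

mem = {0: 2, 1: 3, 2: 0}
run([("add", 2, 0, 1), ("mul", 2, 2, 2)], mem)   # mem[2] == 25
```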