987 results for Treatment algorithm
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724–727, Seattle, USA
Abstract:
Twenty-three patients with cutaneous larva migrans syndrome were prospectively treated with 400 mg/day of albendazole for 3 consecutive days. Clinical response, compliance and tolerance were excellent. Patients were asymptomatic within the first 72 hours of treatment, and no recurrences occurred. Preliminary results with three additional patients suggest that a single oral 400 mg dose may be equally effective.
Abstract:
Although very efficient for the control of morbidity due to S. mansoni in individual patients, chemotherapy alone has not proven successful in managing transmission within hyperendemic areas, even when repeated at short intervals. Consequently, a great deal of effort has been expended on immunologic investigation and the development of a specific vaccine. Based on a study of a group of children (5-14 years) from the state of Alagoas, the author demonstrates that the outcome one year after chemotherapy depends essentially on the "risk rating" of the area of domicile. A regression analysis revealed no significant correlation with age, sex or initial egg counts. Although the study was not designed to reveal individual variations in immune status, it is postulated that putative differences in genetic make-up are irrelevant in terms of large-scale intervention. Since morbidity due to S. mansoni has declined substantially over the last two or three decades, a control policy based on vaccination can only be justified if high levels of protective immunity can be attained. In any case, such a vaccine would have to be administered in early childhood (preferably below the age of three); it can also be demonstrated that immunization in adolescence or adulthood serves no purpose. The author is convinced that environmental intervention, usually dismissed as unrealistic for developing countries, is not only feasible, if done on a selective basis, but a priority.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Engenharia Informática (Informatics Engineering)
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even more difficult if, beyond performance and area, we also consider metrics such as performance efficiency and area efficiency, where the designer seeks the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms considering the main architectural aspects and to determine how each particular architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s. This corresponds to a performance efficiency of 71%.
Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
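The cycle-count equation itself is not reproduced in the abstract. As a rough illustration of the kind of algorithm-oriented analysis described, the sketch below applies a simple roofline-style model to blocked dense matrix multiplication: the attainable throughput is the minimum of the compute ceiling and the bandwidth ceiling. All parameter names and the b/8 flops-per-byte intensity estimate are assumptions for illustration, not values taken from the paper.

```python
def attainable_gflops(n_cores, ghz, flops_per_cycle, bw_gbs, block_n):
    """Attainable double-precision GFLOPs for a blocked matrix multiply.

    A b x b block of the result needs roughly 2*b^2 * 8 bytes of operand
    traffic for 2*b^3 flops, so the arithmetic intensity is about b/8
    flops per byte (8-byte doubles). Illustrative model, not the paper's
    exact equation.
    """
    peak = n_cores * ghz * flops_per_cycle   # compute ceiling (GFLOPs)
    intensity = block_n / 8.0                # flops per byte
    memory_bound = bw_gbs * intensity        # bandwidth ceiling (GFLOPs)
    return min(peak, memory_bound)
```

For example, with 100 cores at 1 GHz, 4 flops/cycle and 16 GB/s, a block size of 64 leaves the design memory-bound at 128 GFLOPs, showing why larger local memories (larger blocks) raise the achievable fraction of peak.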
Abstract:
An adaptive antenna array combines the signals of its elements, subject to constraints, to produce the antenna's radiation pattern while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy, characteristics that depend on the SNR of the incoming signal.
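The paper's simulations use a planar array; as a minimal sketch of the DOA category of algorithms, the code below implements a conventional (Bartlett, delay-and-sum) DOA scan for a uniform linear array, the one-dimensional special case. The array geometry, element spacing and function names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def steering_vector(n_elements, d_over_lambda, theta_deg):
    """Steering vector of a uniform linear array (spacing d/lambda)."""
    theta = np.deg2rad(theta_deg)
    k = 2 * np.pi * d_over_lambda * np.sin(theta)
    return np.exp(1j * k * np.arange(n_elements))

def das_doa(snapshots, d_over_lambda, grid_deg):
    """Delay-and-sum (Bartlett) DOA estimate: scan a grid of candidate
    angles and return the one with the largest steered output power."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
    powers = []
    for theta in grid_deg:
        a = steering_vector(snapshots.shape[0], d_over_lambda, theta)
        powers.append(np.real(a.conj() @ R @ a) / len(a))
    return grid_deg[int(np.argmax(powers))]
```

The same steering vector, normalized, also serves as the beamforming weight vector that points the array's main lobe at the estimated direction.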
Abstract:
In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.
Abstract:
This clinical trial compared the parasitological efficacy, levels of in vivo resistance and side effects of oral chloroquine at 25 mg/kg and 50 mg/kg given over 3 days for Plasmodium falciparum malaria, with an extended follow-up of 30 days. The study enrolled 58 patients in the 25 mg/kg group and 66 in the 50 mg/kg group. All eligible subjects were over 14 years of age and came from the Amazon Basin and Central Brazil during the period from August 1989 to April 1991. The cure rate in the 50 mg/kg group was 89.4% on day 7 and 71.2% on day 14, compared to 44.8% and 24.1% in the 25 mg/kg group. On day 30, 74.1% of the patients in the 25 mg/kg group and 48.4% of the patients in the 50 mg/kg group had detectable parasitaemia. However, the geometric mean parasite density decreased in both groups, especially in the 50 mg/kg group. There were 24.1% RIII and 13.8% RII responses in the 25 mg/kg group. Side effects were minimal in both groups. The present data indicate high-level resistance to chloroquine in both groups; the high-dose regimen only delayed the development of resistance, and its administration should not be recommended as first-choice therapy for P. falciparum malaria in Brazil.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
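The paper's first algorithm evaluates full static mechanical equilibrium; the sketch below implements a simpler, widely used surrogate, the base-support check, to show how a stability evaluation function can be embedded in a CLP constructive heuristic. The full-support criterion and the box representation (axis-aligned, min-corner coordinates) are illustrative assumptions, not the paper's method.

```python
def supported_fraction(box, placed):
    """Fraction of a box's base area resting on the container floor or
    on the top faces of already-placed boxes. Boxes are dicts with the
    min corner (x, y, z) and dimensions (l, w, h). A common simplified
    static stability check accepts a placement when this is 1.0."""
    x1, x2 = box["x"], box["x"] + box["l"]
    y1, y2 = box["y"], box["y"] + box["w"]
    base = (x2 - x1) * (y2 - y1)
    if box["z"] == 0:
        return 1.0  # resting directly on the floor
    area = 0.0
    for b in placed:
        if b["z"] + b["h"] != box["z"]:
            continue  # top face is not level with this box's base
        ox = max(0.0, min(x2, b["x"] + b["l"]) - max(x1, b["x"]))
        oy = max(0.0, min(y2, b["y"] + b["w"]) - max(y1, b["y"]))
        area += ox * oy  # overlap of base with this supporting face
    return area / base
```

A heuristic would call this for each candidate placement and reject (or penalize) placements whose supported fraction falls below a threshold; the paper's equilibrium-based evaluation is stricter, also accounting for forces and moments.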
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems and the upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations in the existing recursive-calculus-based approaches to compute the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter and safe estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
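The BP/BPC algorithms themselves are not detailed in the abstract. The sketch below illustrates only the branch-and-prune principle on a toy worst-case delay problem: branch on whether each interfering packet blocks or not, and prune a branch as soon as even its optimistic bound cannot beat the incumbent worst case. The mutual-exclusion constraint between interferers is an invented stand-in for the task characteristics the extended model captures; nothing here is the paper's actual NoC model.

```python
def worst_case_delay(delays, exclusive):
    """Branch-and-prune search for the worst-case total blocking delay.

    delays    : blocking delay contributed by each interferer if it blocks
    exclusive : set of frozensets {i, j} of interferers that cannot block
                together (illustrative constraint)
    """
    n = len(delays)
    suffix = [0.0] * (n + 1)           # optimistic remaining delay
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + delays[i]
    best = 0.0

    def branch(i, acc, chosen):
        nonlocal best
        if acc + suffix[i] <= best:
            return                      # prune: cannot beat incumbent
        if i == n:
            best = max(best, acc)
            return
        if all(frozenset((i, j)) not in exclusive for j in chosen):
            branch(i + 1, acc + delays[i], chosen | {i})  # i blocks
        branch(i + 1, acc, chosen)                        # i does not
    branch(0, 0.0, frozenset())
    return best
```

The "collapse" refinement in BPC can be thought of as stopping this enumeration at a configurable depth and bounding the remainder pessimistically, trading tightness for computation time.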
Abstract:
We report the first case of African histoplasmosis diagnosed in Brazil. The patient was an immigrant from Angola who had come to Brazil six months after the appearance of the skin lesion. The skin of the right retroauricular area was the only site of involvement. The diagnosis was established by direct mycologic examination, culture, and histopathologic examination of the lesion. The patient was successfully treated with itraconazole 100 mg a day for 52 days. No recurrent skin lesions were observed during the ten-month follow-up period.
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
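A GPU implementation is beyond a short sketch, but the core idea HYCA-style methods exploit, that each pixel's spectrum is explained by a small number of endmembers, can be shown in a few lines: compressive measurements of a pixel lying in a low-dimensional endmember subspace can be recovered exactly by least squares, even from far fewer measurements than spectral bands. The matrix names, dimensions and the plain least-squares recovery are illustrative assumptions, not the HYCA algorithm itself.

```python
import numpy as np

def recover(y, H, E):
    """Least-squares recovery of a pixel spectrum x = E @ a from
    compressive measurements y = H @ x, assuming x lies in the span of
    the endmember matrix E (bands x endmembers). Solving for the small
    abundance vector a instead of x is what makes few measurements
    sufficient."""
    a, *_ = np.linalg.lstsq(H @ E, y, rcond=None)
    return E @ a
```

With, say, 50 spectral bands, 3 endmembers and only 10 random measurements per pixel, the system H @ E is overdetermined in the abundances and recovery is exact in the noiseless case.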
Abstract:
Concerns about metals in urban wastewater treatment plants (WWTPs) are mainly related to their content in discharges to the environment, namely in the final effluent and in the sludge produced. In the near future, more restrictive limits will be imposed on final effluents, following the recent guidelines of the European Water Framework Directive (EUWFD). Concerning the sludge, at least seven metals (Cd, Cr, Cu, Hg, Ni, Pb and Zn) have been regulated in different countries, four of which were classified by the EUWFD as priority substances and two of which were also classified as hazardous substances. Although WWTPs are not designed to remove metals, studying the behaviour of metals in these systems is crucial for developing predictive models that can support the regulation of pre-treatment requirements more effectively and contribute to optimizing the systems towards more acceptable metal concentrations in their discharges. Relevant data on the occurrence, fate and behaviour of metals in WWTPs have been published in recent decades. However, the information is dispersed and not standardized in terms of the parameters used to compare results. This work provides a critical review of this issue through a careful systematization, in tables and graphs, of the results reported in the literature, allowing them to be compared and analysed in order to assess the state of the art in this field. A summary of the main points of consensus, divergences and constraints found, as well as some recommendations, is presented in the conclusions, aiming to contribute to a more concerted direction for future research. © 2015, Islamic Azad University (IAU).