997 results for Randomized algorithm
Abstract:
Recent integrated circuit technologies have opened the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space becomes even more difficult if, beyond performance and area, we also consider metrics such as performance and area efficiency, where the designer aims for the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space of the many-core architecture based on the experimental execution results of a particular set of benchmark algorithms, our approach is to perform a formal analysis of the algorithms considering the main architectural aspects and to determine how each architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we carried out a theoretical analysis of a dense matrix multiplication algorithm and derived an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit design of the proposed architecture, using a 65 nm technology, is able to achieve 464 GFLOPs (double-precision floating point) for a memory bandwidth of 16 GB/s, which corresponds to a performance efficiency of 71%. Considering a 45 nm technology, a 100 mm² chip attains 833 GFLOPs, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
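The abstract does not reproduce the derived cycle equation, but a generic roofline-style cycle model for blocked matrix multiplication gives a feel for how the listed parameters (core count, per-core local memory, external memory bandwidth) interact. The sketch below is a minimal illustration under those assumptions, not the paper's actual equation; all parameter names and default values are invented.

```python
# Illustrative sketch only: the paper's actual cycle equation is not given in the
# abstract, so this reproduces a generic roofline-style model for blocked matrix
# multiplication on a many-core device. All parameter names are assumptions.

def estimated_cycles(n, cores, local_mem_words, mem_bw_words_per_cycle,
                     flops_per_core_per_cycle=2.0):
    """Estimate execution cycles for an n x n x n double-precision matmul."""
    flops = 2.0 * n ** 3                      # multiply and add count
    compute_cycles = flops / (cores * flops_per_core_per_cycle)

    # With b x b blocks held in local memory (roughly 3 blocks per core),
    # the external traffic of a blocked algorithm is about 2*n^3 / b words.
    b = int((local_mem_words / 3) ** 0.5)
    traffic_words = 2.0 * n ** 3 / max(b, 1)
    memory_cycles = traffic_words / mem_bw_words_per_cycle

    # Compute and transfers overlap in the best case, so the bound is the max.
    return max(compute_cycles, memory_cycles)

# Example: 256 cores, 32K words of local memory per core, 2 words/cycle from DRAM.
print(estimated_cycles(n=4096, cores=256, local_mem_words=32768,
                       mem_bw_words_per_cycle=2.0))
```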
Abstract:
An adaptive antenna array combines the signals from its elements, subject to some constraints, to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction-of-arrival (DOA) algorithms are applied to determine the directions of the impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements to create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on the DOA estimation and on the beamforming algorithms. The algorithms used are also compared in terms of runtime and accuracy, characteristics that depend on the SNR of the incoming signal.
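As a rough illustration of the building blocks the abstract mentions (DOA estimation and beamforming weights for a planar array), the sketch below scans a conventional Bartlett spatial spectrum over a uniform rectangular array and steers delay-and-sum weights towards the detected peak. It is not any of the specific algorithms evaluated in the paper, and it assumes half-wavelength element spacing and synthetic data.

```python
# Minimal Bartlett DOA scan and delay-and-sum weights for a uniform planar array.
import numpy as np

def steering_vector(nx, ny, theta, phi, d=0.5):
    """Steering vector of an nx-by-ny planar array (element spacing d in wavelengths)."""
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    phase = 2j * np.pi * d * (ix * np.sin(theta) * np.cos(phi)
                              + iy * np.sin(theta) * np.sin(phi))
    return np.exp(phase).ravel()

def bartlett_spectrum(R, nx, ny, thetas, phis):
    """Scan the covariance matrix R over a grid of candidate directions."""
    spec = np.zeros((len(thetas), len(phis)))
    for i, th in enumerate(thetas):
        for j, ph in enumerate(phis):
            a = steering_vector(nx, ny, th, ph)
            spec[i, j] = np.real(a.conj() @ R @ a) / (a.size ** 2)
    return spec

# Synthetic example: one source at (30 deg, 45 deg) plus noise.
nx = ny = 4
a0 = steering_vector(nx, ny, np.deg2rad(30), np.deg2rad(45))
snapshots = np.outer(a0, np.ones(200)) + 0.1 * (
    np.random.randn(nx * ny, 200) + 1j * np.random.randn(nx * ny, 200))
R = snapshots @ snapshots.conj().T / 200

thetas = np.deg2rad(np.arange(0, 91, 1))
phis = np.deg2rad(np.arange(0, 181, 1))
i, j = np.unravel_index(np.argmax(bartlett_spectrum(R, nx, ny, thetas, phis)),
                        (len(thetas), len(phis)))
w = steering_vector(nx, ny, thetas[i], phis[j]) / (nx * ny)  # beamforming weights
print("estimated DOA (deg):", np.rad2deg(thetas[i]), np.rad2deg(phis[j]))
```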
Abstract:
This clinical trial compared the parasitological efficacy, levels of in vivo resistance and side effects of oral chloroquine at 25 mg/kg and 50 mg/kg given as a 3-day treatment for Plasmodium falciparum malaria, with an extended follow-up of 30 days. The study enrolled 58 patients in the 25 mg/kg group and 66 in the 50 mg/kg group. All eligible subjects were over 14 years of age and came from the Amazon Basin and Central Brazil during the period from August 1989 to April 1991. The cure rate in the 50 mg/kg group was 89.4% on day 7 and 71.2% on day 14, compared to 44.8% and 24.1% in the 25 mg/kg group. 74.1% of the patients in the 25 mg/kg group and 48.4% of the patients in the 50 mg/kg group had detectable parasitaemia on day 30. However, there was a decrease in the geometric mean parasite density in both groups, especially in the 50 mg/kg group. There were 24.1% of RIII and 13.8% of RII responses in the 25 mg/kg group. Side effects were found to be minimal in both groups. The present data indicate a high level of resistance to chloroquine in both groups; the high-dose regimen only delayed the development of resistance, and its administration should not be recommended as the first choice for P. falciparum malaria therapy in Brazil.
Abstract:
The container loading problem (CLP) is a combinatorial optimization problem for the spatial arrangement of cargo inside containers so as to maximize the usage of space. The algorithms for this problem are of limited practical applicability if real-world constraints are not considered, one of the most important of which is deemed to be stability. This paper addresses static stability, as opposed to dynamic stability, looking at the stability of the cargo during container loading. This paper proposes two algorithms. The first is a static stability algorithm based on static mechanical equilibrium conditions that can be used as a stability evaluation function embedded in CLP algorithms (e.g. constructive heuristics, metaheuristics). The second proposed algorithm is a physical packing sequence algorithm that, given a container loading arrangement, generates the actual sequence by which each box is placed inside the container, considering static stability and loading operation efficiency constraints.
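To illustrate how a stability evaluation function can be embedded in a CLP heuristic, the sketch below implements the much simpler "full base support" criterion rather than the static mechanical equilibrium conditions the paper proposes; the box representation and tolerance values are assumptions.

```python
# Simplified stability evaluation for container loading: "full base support",
# a common baseline criterion, NOT the paper's equilibrium-based algorithm.
from dataclasses import dataclass

@dataclass
class Box:
    x: float; y: float; z: float      # position of the minimum corner
    l: float; w: float; h: float      # length (x), width (y), height (z)

def base_support(box, placed, tol=1e-6):
    """Fraction of the box's base area resting on the floor or on other boxes."""
    if box.z <= tol:                   # box sits directly on the container floor
        return 1.0
    supported = 0.0
    for other in placed:
        if abs(other.z + other.h - box.z) > tol:
            continue                   # top face of 'other' is not at the box's base level
        dx = min(box.x + box.l, other.x + other.l) - max(box.x, other.x)
        dy = min(box.y + box.w, other.y + other.w) - max(box.y, other.y)
        if dx > 0 and dy > 0:
            supported += dx * dy       # overlapping support area
    return supported / (box.l * box.w)

def is_statically_stable(box, placed, min_support=1.0):
    return base_support(box, placed) >= min_support - 1e-9

# A constructive heuristic would call is_statically_stable() before accepting
# each candidate placement.
placed = [Box(0, 0, 0, 10, 10, 5)]
print(is_statically_stable(Box(0, 0, 5, 10, 10, 5), placed))   # True
print(is_statically_stable(Box(5, 5, 5, 10, 10, 5), placed))   # False (partial support)
```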
Abstract:
“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that have an impact on the execution time of the tasks deployed on these systems, as well as upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet when it is transmitted over the NoC infrastructure. To this end, we first identify and explore some limitations of the existing recursive-calculus-based approaches for computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter, yet still safe, estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, namely “Branch, Prune and Collapse” (BPC), which offers a configurable parameter that provides a flexible trade-off between the computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and also provide some case studies to observe the impact of task parameters on the WCTT estimates.
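As a rough illustration of the kind of recursion that recursive-calculus-based WCTT analyses build on, the sketch below bounds a flow's traversal time by its contention-free latency plus the (recursively bounded) traversal times of higher-priority flows that share a link with it. This is a deliberately simplified, priority-based model, not the paper's BP or BPC algorithm, and the flow table is invented.

```python
# Simplified recursive WCTT-style bound: a flow can be delayed by every
# higher-priority flow sharing a link, and those flows carry their own bounds.

def wctt(flow, flows):
    """Upper bound on the traversal time of `flow` (cycles)."""
    bound = flows[flow]["base_latency"]
    for other, attrs in flows.items():
        if other == flow or attrs["priority"] >= flows[flow]["priority"]:
            continue                     # only higher-priority flows (lower number) block
        if attrs["links"] & flows[flow]["links"]:
            bound += wctt(other, flows)  # blockers may themselves be blocked
    return bound

flows = {
    "f1": {"base_latency": 20, "priority": 1, "links": {"A-B", "B-C"}},
    "f2": {"base_latency": 15, "priority": 2, "links": {"B-C", "C-D"}},
    "f3": {"base_latency": 10, "priority": 3, "links": {"C-D"}},
}
print({f: wctt(f, flows) for f in flows})   # f3's bound inherits f2's, which inherits f1's
```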
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally small number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures from NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
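The following NumPy sketch only illustrates the idea HYCA builds on, namely that pixels are approximately linear mixtures of a few endmembers, so far fewer random measurements than spectral bands can suffice. It is neither the exact HYCA optimization problem nor the paper's CUDA implementation; all dimensions and the least-squares reconstruction are illustrative assumptions.

```python
# Per-pixel compressive measurement and unmixing-based reconstruction (toy example).
import numpy as np

bands, pixels, p, m = 224, 1000, 5, 30        # bands, pixels, endmembers, measurements
rng = np.random.default_rng(0)

E = rng.random((bands, p))                    # endmember signatures (assumed known here)
A = rng.dirichlet(np.ones(p), size=pixels).T  # abundances: non-negative, sum to one
X = E @ A                                     # hyperspectral data, one column per pixel

H = rng.standard_normal((m, bands)) / np.sqrt(m)   # random measurement matrix
Y = H @ X                                     # compressive measurements (m << bands)

# Reconstruction: estimate abundances from the measurements, then re-mix.
A_hat, *_ = np.linalg.lstsq(H @ E, Y, rcond=None)
X_hat = E @ A_hat
print("relative reconstruction error:",
      np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```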
Abstract:
This paper presents a step count algorithm designed to work in real time using low computational power. This proposal is our first step towards the development of an indoor navigation system based on Pedestrian Dead Reckoning (PDR). We present two approaches to solve this problem and compare them based on their step-counting error, as well as on their suitability for use in a real-time system.
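The abstract does not detail the two approaches, but step counters intended for real-time, low-power use are typically built around simple peak or threshold detection on the accelerometer magnitude. The sketch below shows one such generic detector with a minimum inter-step interval; the sampling rate, threshold and refractory period are assumed values, not the paper's.

```python
# Generic real-time step counter: rising threshold crossings on the acceleration
# magnitude, with a refractory period between counted steps.
import math

def count_steps(samples, fs=50.0, threshold=11.0, min_interval=0.3):
    """samples: iterable of (ax, ay, az) in m/s^2 sampled at fs Hz."""
    steps, last_step_t, above = 0, -min_interval, False
    for i, (ax, ay, az) in enumerate(samples):
        t = i / fs
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above and (t - last_step_t) >= min_interval:
            steps += 1
            last_step_t = t
        above = mag > threshold          # count only rising crossings
    return steps

# Synthetic walk: a 2 Hz bump superimposed on gravity for 10 seconds.
samples = [(0.0, 0.0, 9.8 + 3.0 * max(0.0, math.sin(2 * math.pi * 2.0 * i / 50.0)))
           for i in range(500)]
print(count_steps(samples))   # roughly 20 steps
```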
Abstract:
This paper presents an ankle-mounted Inertial Navigation System (INS) used to estimate the distance traveled by a pedestrian. This distance is estimated from the number of steps taken by the user. The proposed method uses force sensors to enhance the results obtained from the INS. Experimental results have shown that, depending on the step frequency, the traveled-distance error varies between 2.7% and 5.6%.
Abstract:
Background: Mammography is considered the best imaging technique for breast cancer screening, and the radiographer plays an important role in its performance. Therefore, continuing education is critical to improving the performance of these professionals and thus providing better health care services. Objective: Our goal was to develop an e-learning course on breast imaging for radiographers, assessing its efficacy, effectiveness, and user satisfaction. Methods: A stratified randomized controlled trial was performed with radiographers and radiology students who already had mammography training, using pre- and post-knowledge tests and satisfaction questionnaires. The primary outcome was the improvement in test results (percentage of correct answers), using intention-to-treat and per-protocol analyses. Results: A total of 54 participants were assigned to the intervention (20 students plus 34 radiographers), with 53 controls (19+34). The intervention was completed by 40 participants (11+29), with 4 (2+2) discontinued interventions and 10 (7+3) lost to follow-up. Differences in the primary outcome were found between intervention and control: 21 versus 4 percentage points (pp), P<.001. Stratified analysis showed an effect in radiographers (23 pp vs 4 pp; P=.004) but was unclear in students (18 pp vs 5 pp; P=.098). Nonetheless, differences in students’ posttest results were found (88% vs 63%; P=.003), which were absent in the pretest (63% vs 63%; P=.106). The per-protocol analysis showed a higher effect (26 pp vs 2 pp; P<.001), both in students (25 pp vs 3 pp; P=.004) and radiographers (27 pp vs 2 pp; P<.001). Overall, 85% were satisfied with the course, and 88% considered it successful. Conclusions: This e-learning course is effective, especially for radiographers, which highlights the need for continuing education.
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). This method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
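The sketch below only reproduces the generative model DECA assumes: pixels as linear mixtures of endmember signatures, with abundances drawn from a mixture of Dirichlet densities and therefore non-negative and summing to one. The GEM inference of the mixing matrix, which is the method's actual contribution, is not shown, and all dimensions are invented.

```python
# Generative model assumed by DECA-style unmixing (illustration only).
import numpy as np

rng = np.random.default_rng(1)
bands, endmembers, pixels = 224, 3, 10_000

M = rng.random((bands, endmembers))            # endmember signatures (columns)
# Two Dirichlet "modes" -> a mixture of Dirichlet densities over abundances.
alpha_modes = np.array([[8.0, 1.0, 1.0], [1.0, 1.0, 8.0]])
modes = rng.integers(0, 2, size=pixels)
S = np.vstack([rng.dirichlet(alpha_modes[k]) for k in modes]).T  # (endmembers, pixels)

X = M @ S + 0.001 * rng.standard_normal((bands, pixels))         # observed pixels

# Abundance constraints imposed by the acquisition process:
assert np.all(S >= 0) and np.allclose(S.sum(axis=0), 1.0)
print(X.shape)   # (224, 10000)
```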
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
The recent changes concerning consumers' active participation in the efficient management of load devices, both in their own interest and in the interest of the network operator, namely in the context of demand response, lead to the need for improved algorithms and tools. A continuous consumption optimization algorithm has been improved in order to better manage shifted demand. It has been implemented in a simulation and user-interaction tool capable of being integrated into an already developed multi-agent smart grid simulator, and also capable of integrating several optimization algorithms to manage real and simulated loads. The case study in this paper highlights the advantages of the proposed algorithm and the benefits of using the developed simulation and user-interaction tool.
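The abstract does not describe the optimization algorithm itself, so the sketch below only illustrates the kind of decision a demand-shifting tool has to make: placing flexible loads in the cheapest allowed periods subject to a per-period power cap. It is a greedy toy example, not the paper's continuous optimization algorithm, and the prices, cap and loads are invented.

```python
# Greedy demand-shifting illustration: cheapest allowed period under a power cap.

def shift_loads(prices, cap, base, flexible):
    """flexible: list of (power_kW, allowed_periods); returns period -> total kW."""
    schedule = dict(enumerate(base))
    for power, allowed in flexible:
        # Pick the cheapest allowed period that still respects the cap.
        for t in sorted(allowed, key=lambda t: prices[t]):
            if schedule[t] + power <= cap:
                schedule[t] += power
                break
        else:
            raise ValueError("load cannot be placed without violating the cap")
    return schedule

prices = [0.10, 0.10, 0.25, 0.30, 0.12, 0.09]   # price per period (currency/kWh)
base = [1.0, 1.2, 2.0, 2.2, 1.5, 1.0]           # non-shiftable demand (kW)
flexible = [(1.5, [0, 1, 4, 5]), (2.0, [2, 3, 4, 5])]
print(shift_loads(prices, cap=3.5, base=base, flexible=flexible))
```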
Abstract:
The integration of the Smart Grid concept into the electric grid brings the need for the active participation of small and medium players. This active participation can be achieved using decentralized decisions, in which the end consumer manages loads according to the Smart Grid's needs. The management of loads must take into account the users' preferences and needs; however, these preferences and needs can change when faced with exceptional events. This paper proposes the integration of exceptional events into the SCADA House Intelligent Management (SHIM) system developed by the authors, to handle machine learning issues in the domestic consumption context. An illustrative application and a learning case study are provided in this paper.
Abstract:
Treatment with indinavir has been shown to result in marked decreases in viral load and increases in CD4 cell counts in HIV-infected individuals. A randomized double-blind study evaluating the efficacy of indinavir alone (800 mg q8h), zidovudine alone (200 mg q8h) or the combination was performed to assess progression to AIDS. 996 antiretroviral-therapy-naive patients with CD4 cell counts of 50-250/mm3 were allocated to treatment. During the trial the protocol was amended to add lamivudine to the zidovudine-containing arms. The primary endpoint was time to development of an AIDS-defining illness or death. The study was terminated after a protocol-defined interim analysis demonstrated highly significant reductions in progression to a clinical event in the indinavir-containing arms compared to the zidovudine arm (p<0.0001). Over a median follow-up of 52 weeks (up to 99 weeks), the percent reductions in hazard for the indinavir plus zidovudine and indinavir-alone groups compared to the zidovudine group were 70% and 61%, respectively. Significant reductions in HIV RNA and increases in CD4 cell counts were also seen in the indinavir-containing groups compared to the zidovudine group. Improvements in both CD4 cell count and HIV RNA were associated with a reduced risk of disease progression. All three regimens were generally well tolerated.
Abstract:
A significantly diminished antibody response to hepatitis B vaccine has been demonstrated in adults when the buttock is used as the injection site. However, in Brazil, the buttock continues to be recommended as the site for intramuscular administration of vaccines in infants. In this age group, there are no controlled studies evaluating the immunogenicity of the hepatitis B vaccine when administered at this site. In the present study, 258 infants were randomized to receive the hepatitis B vaccine either in the buttock (n = 123) or in the anterolateral thigh muscle (n = 135). The immunization schedule consisted of three doses of hepatitis B vaccine (Engerix B®, 10 μg) at 2, 4 and 9 months of age. There were no significant differences in the proportion of seroconversion (99.3% vs 99.2%) or in the geometric mean titer of ELISA anti-HBs (1,862.1 vs 1,229.0 mIU/mL) between the two groups. This study demonstrates that a satisfactory serological response can be obtained when the hepatitis B vaccine is administered intramuscularly into the buttock.