852 results for User interfaces (Computer systems)


Relevance:

100.00%

Publisher:

Abstract:

Malicious programs (malware) can cause severe damage to computer systems and data. The mechanism the human immune system uses to detect and protect the body from threatening organisms is efficient and can be adapted to detect malware attacks. In this paper we propose a system for distributed malware collection, analysis, and detection, the last of which is inspired by the human immune system. After malware samples are collected from the Internet, they are dynamically analyzed to produce execution traces at the operating-system level and network flows, which are used to build a behavioral model and to generate a detection signature. Those signatures serve as input to a malware detector, acting as the antibodies in the antigen detection process. This allows us to understand the malware attack and aids in infection removal procedures. © 2012 Springer-Verlag.
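
As a very rough illustration of the signature-matching step described in this abstract (this is not the authors' implementation, and every event name below is hypothetical), a behavioral signature can be treated as a set of operating-system and network events, and a trace can be matched against it by coverage:

```python
# Hypothetical sketch: matching a behavioral signature against an execution trace.
# A "signature" here is simply a set of suspicious OS/network events extracted
# during dynamic analysis; a trace is the event sequence of a monitored process.

from typing import Iterable, Set

def matches_signature(trace: Iterable[str], signature: Set[str], threshold: float = 0.8) -> bool:
    """Return True if the trace covers at least `threshold` of the signature's events."""
    observed = set(trace)
    if not signature:
        return False
    coverage = len(signature & observed) / len(signature)
    return coverage >= threshold

# Example usage with made-up event names:
sig = {"CreateRemoteThread", "RegSetValue:Run", "connect:443"}
trace = ["OpenProcess", "CreateRemoteThread", "RegSetValue:Run", "WriteFile"]
print(matches_signature(trace, sig))  # False: only 2 of the 3 signature events appear
```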

Relevance:

100.00%

Publisher:

Abstract:

Simulating large and complex systems, such as computing grids, is a difficult task. Current simulators, despite providing accurate results, are hard to use: they usually demand strong programming knowledge, which is not common among today's users of grids and high-performance computing. This need for computing expertise prevents such users from simulating how the environment will respond to their applications, which can lead to large losses of efficiency and waste precious computational resources. In this paper we introduce iSPD, the iconic Simulator of Parallel and Distributed Systems, in which grid models are produced through an iconic interface. We describe the simulator and its intermediate model languages. The results presented here provide insight into its ease of use and accuracy.
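
As a loose illustration of what such a grid simulator computes from a model (this is not iSPD's intermediate model language; the machine rates, task sizes, and scheduling policy below are made up), here is a minimal sketch of greedy task assignment on grid machines:

```python
# Hypothetical sketch: given machines with a processing rate (e.g. MFLOPS) and tasks
# with a computational size, estimate per-machine finish times under a simple greedy
# "earliest finish time" scheduler. Illustration only, not the iSPD model.

def simulate(machine_rates, task_sizes):
    """Greedily assign each task to the machine that would finish it first."""
    finish = [0.0] * len(machine_rates)
    for size in task_sizes:
        best = min(range(len(machine_rates)),
                   key=lambda m: finish[m] + size / machine_rates[m])
        finish[best] += size / machine_rates[best]
    return finish

# Two machines (100 and 50 MFLOPS) and four tasks of 200 MFLOP each:
print(simulate([100.0, 50.0], [200.0] * 4))  # [6.0, 4.0] -> makespan of 6.0 s
```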

Relevance:

100.00%

Publisher:

Abstract:

Transactional memory (TM) is a synchronization mechanism devised to simplify parallel programming, thereby helping programmers unleash the power of current multicore processors. Although software implementations of TM (STM) have been extensively analyzed in terms of runtime performance, little attention has been paid to an equally important constraint faced by nearly all computer systems: energy consumption. In this work we conduct a comprehensive study of energy and runtime tradeoffs in software transactional memory systems. We characterize the behavior of three state-of-the-art lock-based STM algorithms, along with three different conflict resolution schemes. Based on this characterization, we propose a DVFS-based technique that can be integrated into the resolution policies so as to improve the energy-delay product (EDP). Experimental results show that our DVFS-enhanced policies are indeed beneficial for applications with high contention levels: improvements of up to 59% in EDP can be observed in this scenario, with an average EDP reduction of 16% across the STAMP workloads. © 2012 IEEE.
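
The metric in question is the energy-delay product, EDP = energy × runtime, so a policy that trades a little extra runtime (e.g. by lowering the CPU frequency while a transaction backs off after a conflict) for a larger energy saving still improves it. A minimal sketch with made-up numbers, not figures from the paper's experiments:

```python
# Energy-delay product as used to compare conflict resolution policies.
# The example values below are purely illustrative.

def edp(energy_joules: float, runtime_seconds: float) -> float:
    return energy_joules * runtime_seconds

baseline = edp(energy_joules=120.0, runtime_seconds=10.0)   # 1200 J*s
dvfs     = edp(energy_joules=90.0,  runtime_seconds=11.2)   # 1008 J*s: slower but cheaper
print(f"EDP reduction: {100 * (1 - dvfs / baseline):.1f}%")  # 16.0%
```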

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Engenharia Elétrica - FEIS

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Engenharia Mecânica - FEG

Relevance:

100.00%

Publisher:

Abstract:

In this tutorial we present a review of Euler deconvolution in three parts. In the first part, we recall the role of the classical formulation of 2D and 3D Euler deconvolution as a method to automatically locate anomalous potential-field sources, and we point out the difficulties of this formulation: the presence of an undesirable cloud of solutions, the empirical criterion used to determine the structural index (a parameter related to the nature of the anomalous source), the feasibility of applying Euler deconvolution to ground magnetic surveys, and the determination of the dip and the magnetic susceptibility contrast of geological contacts (or of the product of the susceptibility contrast and the thickness when applied to a thin dike). In the second part, we present recent improvements aimed at minimizing some of the difficulties raised in the first part of this tutorial. These improvements include: i) the selection of solutions essentially associated with observations presenting a high signal-to-noise ratio; ii) the use of the correlation between the estimated anomaly base level and the observed anomaly itself, or the combination of Euler deconvolution with the analytic signal, to determine the structural index; iii) the combination of the results of (i) and (ii), allowing the structural index to be estimated independently of the number of solutions, so that a smaller number of observations (as in ground surveys) can be used; iv) the introduction of additional equations, independent of Euler's equation, that allow the dip and the susceptibility contrast of 2D magnetic sources to be estimated. In the third part we present a prognosis of short- and medium-term future developments involving Euler deconvolution. The main perspectives are: i) new attacks on the problems singled out in the second part of this tutorial; ii) the development of methods that account for interference from sources located beside or above the main source; and iii) the use of the source-location estimates produced by Euler deconvolution as constraints in inversion methods, so as to delineate the sources in a user-friendly computational environment.
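
For reference, the classical 3D Euler homogeneity equation that the first part of the tutorial revisits is, in its standard textbook form (not reproduced from the tutorial itself):

```latex
% Classical 3D Euler deconvolution equation (standard form):
% (x_0, y_0, z_0) is the source position, T the observed field, b its base level,
% and \eta the structural index that encodes the nature of the source.
(x - x_0)\,\frac{\partial T}{\partial x}
+ (y - y_0)\,\frac{\partial T}{\partial y}
+ (z - z_0)\,\frac{\partial T}{\partial z}
= \eta\,\bigl(b - T\bigr)
```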

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Huge image collections have become available in recent years. In this scenario, Content-Based Image Retrieval (CBIR) systems have emerged as a promising approach to support image searches. The objective of CBIR systems is to retrieve the images in a collection most similar to a given query image by taking into account visual properties such as texture, color, and shape. In these systems, the effectiveness of the retrieval process depends heavily on the accuracy of the ranking approaches. Recently, re-ranking approaches have been proposed to improve the effectiveness of CBIR systems by taking into account the relationships among images. Re-ranking approaches consider the relationships among all images in a given dataset and typically demand a huge amount of computational power, which hampers their use in practical situations. On the other hand, these methods can be massively parallelized. In this paper, we propose to speed up the computation of the RL-Sim algorithm, a recently proposed image re-ranking approach, by using the computational power of Graphics Processing Units (GPUs). GPUs are emerging as relatively inexpensive parallel processors that are becoming available on a wide range of computer systems. We address the performance challenges of image re-ranking by proposing a parallel solution designed to fit the computational model of GPUs. We conducted an experimental evaluation considering different implementations and devices. Experimental results demonstrate that significant performance gains can be obtained: our approach achieves speedups of 7x over the serial implementation for the overall algorithm and of up to 36x on its core steps.
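
As a hedged illustration of the kind of per-pair work that rank-list based re-ranking performs (this is not the actual RL-Sim formulation), the similarity of two images can be taken as the overlap of their top-k neighbor lists; since every image pair is independent, this inner loop is what maps naturally onto GPU threads. The sketch below is a serial CPU version with made-up data:

```python
# Hypothetical serial sketch of comparing the top-k neighbor lists of two images.
# Each (i, j) pair is independent, so this is the work that can be distributed
# across GPU threads; here it is only illustrated on the CPU with NumPy.

import numpy as np

def topk_intersection(ranked_lists: np.ndarray, i: int, j: int, k: int) -> float:
    """Similarity of images i and j as the overlap of their top-k neighbor lists."""
    a = set(ranked_lists[i, :k].tolist())
    b = set(ranked_lists[j, :k].tolist())
    return len(a & b) / k

# ranked_lists[i] holds the dataset indices ordered by similarity to image i.
ranked_lists = np.array([[0, 2, 3, 1],
                         [1, 3, 2, 0],
                         [2, 0, 3, 1],
                         [3, 1, 2, 0]])
print(topk_intersection(ranked_lists, 0, 1, k=3))  # ~0.67: {0, 2, 3} and {1, 3, 2} share {2, 3}
```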

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the radiometric parameters determined by the medical physicist during routine radiotherapy planning in cases of breast cancer. The contours of the breast volume of patients undergoing irradiation of breast tumors at the Hospital das Clínicas, Faculty of Medicine, UNESP, Botucatu (HCFMB) during 2012 were analyzed. In order to analyze the influence of physical and radiometric parameters on the dose distribution in the irradiated breast volume, isodose curves were prepared at four different breast heights and compared with isodose curves plotted computationally. In routine planning, the medical physicist must determine the isodose curve that gives the best dose-distribution homogeneity in the irradiated volume. The choice of the treatment plan can be made with dedicated computer systems, which require significantly costly investments and are available only to services with better financial support. In the Medical Physics Service of the Department of Radiotherapy at HCFMB, a two-dimensional software package is used to determine isodose curves; however, this software is out of date, frequently becomes inoperable due to lack of maintenance, and is a closed system that computing professionals cannot modify. This requires the manual preparation of isodose curves, which is subject to uncertainties arising from the subjectivity of the clinical interpretation by the radiation oncologist and the medical physicist responsible for the planning, in addition to demanding significant calculation time. The choice of the optimal isodose curve depends on the energy of the radiation beam and on the geometry and dimensions of the irradiated area. The breast contours evaluated in this work showed that, for a given input energy, such as the 1.25 MeV gamma radiation of the telecobalt therapy unit, the determination of the percentage depth dose (PDP) ...
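
The percentage depth dose mentioned at the end of the abstract is simply the dose at a given depth expressed as a percentage of the dose at the depth of maximum dose; a minimal sketch with purely illustrative values (not measurements from the HCFMB service):

```python
# Percentage depth dose (PDP/PDD): dose at depth d as a percentage of the dose at
# the depth of maximum dose. The numbers below are illustrative only.

def percentage_depth_dose(dose_at_depth: float, dose_at_dmax: float) -> float:
    return 100.0 * dose_at_depth / dose_at_dmax

# For a Co-60 beam (1.25 MeV gamma), dmax lies at roughly 0.5 cm; example reading at 10 cm:
print(percentage_depth_dose(dose_at_depth=1.12, dose_at_dmax=2.00))  # 56.0 %
```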

Relevance:

100.00%

Publisher:

Abstract:

Pós-graduação em Ciência da Computação - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)