736 results for Computer Science, theory and methods
Abstract:
In this work we apply a quantum-circuit treatment to describe nuclear spin relaxation. Starting from Redfield theory, we obtain a description of quadrupolar relaxation as a computational process in a spin-3/2 system, through a model in which the environment comprises five qubits and three different quantum noise channels. The interaction between the environment and the spin-3/2 nucleus is described by a quantum circuit fully compatible with the Redfield theory of relaxation. Theoretical predictions are compared with experimental data, and a short review of quantum channels and relaxation in NMR qubits is also included.
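As background for the noise-channel language used above, the sketch below shows how a quantum noise channel acts on a density matrix through its Kraus operators. It is a minimal single-qubit illustration: the phase-damping channel and the parameter value are chosen for concreteness and are not the five-qubit environment model of the paper.

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply a quantum channel: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def phase_damping(gamma):
    """Kraus operators of a single-qubit phase-damping channel."""
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
    K1 = np.array([[0, 0], [0, np.sqrt(gamma)]])
    return [K0, K1]

# |+><+| state: off-diagonal coherences decay under phase damping.
rho = 0.5 * np.ones((2, 2), dtype=complex)
rho_out = apply_channel(rho, phase_damping(0.3))
print(rho_out)            # coherences shrink by sqrt(1 - gamma)
print(np.trace(rho_out))  # trace is preserved: 1.0
```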
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant, size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident, as, conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations from a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use a theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34pp.] to characterize testable permutation parameters along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovász, V.T. Sós, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC '06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270.] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727.] stating that every hereditary graph property is testable.
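To illustrate what parameter testing means in practice (an illustration only, not the characterization proved in the paper), the sketch below estimates a simple permutation parameter, the inversion density, from constant-size random subpermutations; the parameter choice and all sizes are arbitrary.

```python
import random

def inversion_density(perm):
    """Fraction of pairs i < j with perm[i] > perm[j]."""
    n = len(perm)
    inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return inv / (n * (n - 1) / 2)

def sampled_estimate(perm, k, trials=200):
    """Estimate the inversion density from random k-point subpermutations."""
    n = len(perm)
    total = 0.0
    for _ in range(trials):
        idx = sorted(random.sample(range(n), k))   # random subpermutation
        total += inversion_density([perm[i] for i in idx])
    return total / trials

perm = random.sample(range(2000), 2000)   # a uniformly random permutation
print(inversion_density(perm))            # exact value (close to 0.5 here)
print(sampled_estimate(perm, k=20))       # constant-size sampled estimate
```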
Abstract:
The InteGrade project is a multi-university effort to build a novel grid computing middleware based on the opportunistic use of resources belonging to user workstations. The InteGrade middleware currently enables the execution of sequential, bag-of-tasks, and parallel applications that follow the BSP or MPI programming models. This article presents the lessons learned over the last five years of InteGrade's development and describes the solutions achieved concerning support for robust application execution. The contributions cover the related fields of application scheduling, execution management, and fault tolerance. We present our solutions, describing their implementation principles, and evaluate them through the analysis of several experimental results.
Abstract:
The InteGrade middleware aims to exploit the idle time of computing resources in computer laboratories. In this work we investigate the performance of running parallel applications with inter-process communication on the InteGrade grid. Since communication on a grid can be prohibitively costly, we explore the so-called systolic, or wavefront, paradigm to design parallel algorithms that use no global communication. To evaluate the InteGrade middleware we considered three parallel algorithms that solve the matrix chain product problem, the 0-1 knapsack problem, and the local sequence alignment problem, respectively. We show that these three applications running under InteGrade and MPI take only slightly more time than the same applications running on a cluster with plain LAM-MPI support. The results can be considered promising, as the time difference between the two settings is not substantial. The overhead of the InteGrade middleware is acceptable in view of the benefits it provides to grid users, such as job submission, checkpointing, security, and job migration.
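The wavefront idea can be pictured on the local sequence alignment problem. The sketch below is a minimal serial Python version (not InteGrade's MPI implementation): Smith-Waterman scores are computed one anti-diagonal at a time, so each cell depends only on cells of the two previous wavefronts, which is what removes the need for global communication when the wavefronts are distributed across processors.

```python
def smith_waterman_wavefront(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score, computed anti-diagonal by anti-diagonal."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    best = 0
    for d in range(2, n + m + 1):                 # wavefront: i + j = d
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Only neighbours from the two previous wavefronts are read.
            H[i][j] = max(0, H[i-1][j-1] + s, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman_wavefront("ACG", "AG"))  # -> 3 (align A-G with one gap)
```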
Abstract:
Belief revision deals with the problem of adding new information to a knowledge base in a consistent way. Ontology debugging, on the other hand, aims to find the axioms in a terminological knowledge base that caused the base to become inconsistent. In this article, we propose a belief revision approach to finding and repairing inconsistencies in ontologies represented in some description logic (DL). As the usual belief revision operators cannot be directly applied to DLs, we propose new operators that can be used with more general logics and show that, in particular, they can be applied to the logics underlying OWL-DL and OWL Lite.
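As an illustration of the debugging side, the sketch below applies a standard deletion-based shrinking procedure (not the revision operators proposed in the article) to find one minimal inconsistent subset of axioms, given a consistency oracle; the toy "reasoner" stands in for a real DL reasoner and is purely illustrative.

```python
def minimal_inconsistent_subset(axioms, is_consistent):
    """Shrink an inconsistent axiom list to a subset-minimal
    inconsistent core; `is_consistent` plays the role of a DL reasoner."""
    assert not is_consistent(axioms)
    core = list(axioms)
    for ax in list(axioms):
        trial = [a for a in core if a != ax]
        if not is_consistent(trial):   # still clashes: ax is not needed
            core = trial
    return core

# Toy "reasoner": a set is inconsistent iff it contains both X and "not X".
toy = lambda axs: not any(("not " + a) in axs for a in axs)
print(minimal_inconsistent_subset(["A", "B", "not A", "C"], toy))
# -> ['A', 'not A']
```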
Abstract:
Large-scale simulations of parts of the brain using detailed neuronal models, aimed at improving our understanding of brain function, are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance costs of these computers, including physical space, air conditioning, and electrical power, limit the number of simulations of this kind that scientists can perform. Modern commodity graphics cards based on the CUDA platform contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads, and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model that neuron. Communication among neurons located in different GPUs is coordinated by the CPU. Compared with a modern quad-core CPU, we obtained speedups of 40 for the simulation of 200k neurons receiving random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards carrying two GPUs each.
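The per-neuron work assigned to each thread in such a scheme can be pictured with a single Hodgkin-Huxley integration step. The sketch below is a plain-Python, single-neuron forward-Euler version with the classic squid-axon parameters, shown only to fix ideas; it is not the paper's multi-GPU code.

```python
import math

# Classic Hodgkin-Huxley constants (mV, ms, uF/cm^2, mS/cm^2).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1 / (1 + math.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One Euler step of the coupled HH equations -- the work one
    GPU thread would perform for its neuron at each time step."""
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    dV = (I_ext - I_Na - I_K - I_L) / C
    dm = alpha_m(V) * (1 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1 - n) - beta_n(V) * n
    return V + dt * dV, m + dt * dm, h + dt * dh, n + dt * dn

state = (-65.0, 0.05, 0.6, 0.32)        # resting state
for _ in range(2000):                   # 20 ms with constant input current
    state = hh_step(*state, I_ext=10.0)
print(state[0])                         # membrane potential after 20 ms
```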
Abstract:
This dissertation studies the propagation of crises through the financial system. More specifically, it seeks to develop models that make it possible to simulate how a given economic shock hits particular agents of the financial system and propagates from them, turning into a systemic problem. The dissertation is divided into two chapters, besides the introduction. The first chapter develops a model of crisis propagation in investment funds based on network science. Combining two models of propagation in financial networks, one simulating the propagation of losses in bipartite networks of assets and financial agents and the other simulating the propagation of losses in a network of direct investments in shares of other agents, we develop an algorithm to simulate the propagation of losses through both mechanisms and use it to simulate a crisis in the Brazilian investment fund market. In Chapter 2, we develop an agent-based simulation model, with financial agents, to simulate the propagation of a shock affecting the repo market. We also create an artificial market composed of banks, hedge funds, and short-term funds, and simulate the propagation of a liquidity shock on a securitized risky asset used as collateral for the banks' repo operations.
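A minimal sketch of the second propagation mechanism described above, losses cascading through direct holdings of other agents' shares, is given below; the holdings matrix, the shock, and the fixed-point contagion rule are illustrative placeholders, not the dissertation's calibrated model.

```python
def propagate_losses(holdings, shock, rounds=100, tol=1e-9):
    """Iterate loss_i = shock_i + sum_j holdings[i][j] * loss_j to a
    fixed point; holdings[i][j] is the fraction of fund i's portfolio
    invested in shares of fund j."""
    n = len(shock)
    loss = list(shock)
    for _ in range(rounds):
        new = [shock[i] + sum(holdings[i][j] * loss[j] for j in range(n))
               for i in range(n)]
        if max(abs(new[i] - loss[i]) for i in range(n)) < tol:
            break
        loss = new
    return loss

holdings = [[0.0, 0.2, 0.0],   # fund 0 holds 20% of its portfolio in fund 1
            [0.0, 0.0, 0.3],   # fund 1 holds 30% in fund 2
            [0.0, 0.0, 0.0]]   # fund 2 holds no other funds
print(propagate_losses(holdings, shock=[0.0, 0.0, 0.10]))
# -> fund 2 loses 10%, fund 1 loses 3%, fund 0 loses 0.6%
```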
Abstract:
The recipe used to compute the symmetric energy-momentum tensor in the framework of ordinary field theory bears little resemblance, if any, to that used in the context of general relativity. We show that if one starts from the field equations instead of the Lagrangian density, one obtains a unified algorithm for computing the symmetric energy-momentum tensor, in the sense that it can be used both for usual field theory and for general relativity.
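For background, these are the two standard textbook recipes the abstract contrasts (the paper's unified field-equation-based algorithm is something else):

```latex
% Canonical energy-momentum tensor of a field \phi with Lagrangian
% density \mathcal{L}(\phi, \partial\phi) in ordinary field theory:
T^{\mu}{}_{\nu} = \frac{\partial \mathcal{L}}{\partial(\partial_{\mu}\phi)}\,
                  \partial_{\nu}\phi - \delta^{\mu}{}_{\nu}\,\mathcal{L}
% Symmetric (Hilbert) energy-momentum tensor obtained by metric
% variation in general relativity:
T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\,
             \frac{\delta\left(\sqrt{-g}\,\mathcal{L}_{m}\right)}{\delta g^{\mu\nu}}
```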
Abstract:
This paper describes an innovative approach to developing an understanding of the relevance of mathematics to computer science. The mathematical subjects are introduced through an application-to-model scheme that leads computer science students to a better understanding of why they have to learn math and to learn it effectively. Our approach consists of a single one-semester course, taught in the first semester of the program, in which students are initially exposed to some typical computer applications. Once they recognize the applications' complexity, the instructor presents the mathematical models supporting those applications, even before their formal introduction in a math course. We applied this approach at Unesp (Brazil), and the results include a large reduction in the dropout rate and better-prepared students in the final years of our program.
Abstract:
This paper describes an innovative approach to establishing a CS curriculum, aiming at flexibility and at minimizing the time spent in classrooms. This approach has been developed at São Paulo State University (Unesp) at São José do Rio Preto and is producing very interesting results. The load reduction is achieved through a series of fundamental core and breadth courses that precede depth courses in specific areas; the flexibility comes as a side effect of the depth courses, which can be adapted without any changes to the core courses. In the following pages we fully describe our motivations, actions, and results.
Abstract:
Graduate Program in Physics - IFT
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The design of a network is a solution to several engineering and science problems. Many network design problems are known to be NP-hard, and population-based metaheuristics such as evolutionary algorithms (EAs) have been widely investigated for them. Such optimization methods simultaneously generate a large number of potential solutions to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To explore the search space efficiently, special data structures have been developed that provide operations for manipulating a set of spanning trees (a population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and demonstrate that with this encoding, generating a new spanning forest requires average time O(√n). Experiments with an NDDR-based EA applied to large-scale instances of the degree-constrained minimum spanning tree problem show that the implementation adds only small constants and lower-order terms to the theoretical bound.
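The node-depth idea underlying such encodings can be sketched as follows. This is an illustration of the array representation only: the tree is stored as its preorder traversal paired with node depths, so a whole subtree is a contiguous slice that can be pruned and grafted elsewhere; the actual NDDR additionally stores node degrees, and it is that refinement which achieves the O(√n) average time.

```python
def subtree_slice(nd, i):
    """Indices [i, j) of the subtree rooted at position i of the
    node-depth array nd (preorder list of (node, depth) pairs)."""
    d = nd[i][1]
    j = i + 1
    while j < len(nd) and nd[j][1] > d:
        j += 1
    return i, j

def prune_and_graft(nd, i, k):
    """Move the subtree at position i below the node at position k
    (assumes position k lies outside the pruned subtree)."""
    lo, hi = subtree_slice(nd, i)
    # Re-root the slice: shift depths so it hangs one level below k.
    sub = [(v, d - nd[lo][1] + nd[k][1] + 1) for v, d in nd[lo:hi]]
    rest = nd[:lo] + nd[hi:]
    k_new = rest.index(nd[k])              # graft target's new position
    return rest[:k_new + 1] + sub + rest[k_new + 1:]

# Tree with edges 0-1, 1-2, 0-3, stored as (node, depth) in preorder:
nd = [(0, 0), (1, 1), (2, 2), (3, 1)]
print(prune_and_graft(nd, i=3, k=2))       # graft node 3 under node 2
# -> [(0, 0), (1, 1), (2, 2), (3, 3)]
```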