7 results for LARGE SYSTEMS
at Universidad de Alicante
Abstract:
Model Hamiltonians have been, and still are, a valuable tool for investigating the electronic structure of systems for which mean-field theories work poorly. This review will concentrate on the application of Pariser–Parr–Pople (PPP) and Hubbard Hamiltonians to investigate some relevant properties of polycyclic aromatic hydrocarbons (PAH) and graphene. When presenting these two Hamiltonians we will resort to second quantisation, a formalism which, although not used in the original proposal of the former, is much clearer. We will not attempt to be comprehensive; rather, our objective will be to provide the reader with information on what kinds of problems they will encounter and what tools they will need to solve them. One of the key issues concerning model Hamiltonians that will be treated in detail is the choice of model parameters. Although model Hamiltonians reduce the complexity of the original Hamiltonian, in most cases they cannot be solved exactly. So, we shall first consider the Hartree–Fock approximation, still the only tool for handling large systems besides density functional theory (DFT) approaches. We proceed by discussing to what extent one may exactly solve model Hamiltonians, and the Lanczos approach. We shall describe the configuration interaction (CI) method, a common technology in quantum chemistry but one rarely used to solve model Hamiltonians. In particular, we propose a variant of the Lanczos method, inspired by CI, whose novelty is the use of a mean-field (Hartree–Fock) determinant as the seed of the Lanczos process (the method will be named LCI). Two questions of interest related to model Hamiltonians will be discussed: (i) when long-range interactions are included, how crucial is it to include in the Hamiltonian the electronic charge that compensates the ion charges? (ii) Is it possible to reduce a Hamiltonian incorporating Coulomb interactions (PPP) to an 'effective' Hamiltonian including only on-site interactions (Hubbard)? The performance of CI will be checked on small molecules. The electronic structure of azulene and fused azulene will be used to illustrate several aspects of the method. As regards graphene, several questions will be considered: (i) paramagnetic versus antiferromagnetic solutions, (ii) forbidden gap versus dot size, (iii) graphene nanoribbons, and (iv) optical properties.
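For reference, the Hubbard Hamiltonian mentioned above takes, in second quantisation, the standard form

$$ H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}, $$

and the PPP Hamiltonian adds long-range Coulomb terms to it. The sketch below illustrates, in Python, the Lanczos recursion the abstract builds on; in the LCI variant proposed there, the seed vector v0 would be the Hartree–Fock determinant expressed in the many-electron basis. The function name and the dense-matrix representation of H are illustrative assumptions, not the authors' code.

```python
import numpy as np

def lanczos_ground_state(H, v0, n_steps=50):
    """Approximate the lowest eigenvalue of a Hamiltonian matrix H by the
    Lanczos method, using v0 (e.g. a Hartree-Fock determinant in the
    many-electron basis, as in LCI) as the seed vector. Illustrative only."""
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros_like(v)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(n_steps):
        w = H @ v - beta * v_prev          # apply H and orthogonalise
        alpha = np.dot(v, w)
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                    # invariant subspace reached
            break
        v_prev, v = v, w / beta
    # Diagonalise the small tridiagonal matrix built from the recursion
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)[0]
```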
Abstract:
Robotics is a field that presents a large number of problems because it depends on a large number of disciplines, devices, technologies and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges, such as household or professional service robots. To facilitate the rapid, low-cost development of robotic systems, with reusable code and medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems and develop paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems and inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems.
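Purely as an illustration of the kind of distributed, nervous-system-inspired control the abstract alludes to, the sketch below wires three Python agents into a sensory-processing-motor pipeline via message queues; none of the class or agent names come from the paper, and the actual model is not specified here.

```python
import queue
import threading

class Agent(threading.Thread):
    """Minimal autonomous agent: consumes messages from its inbox and
    forwards its output to subscribed agents (illustrative only)."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self.inbox = queue.Queue()
        self.subscribers = []

    def process(self, msg):
        return msg          # concrete agents would override this

    def run(self):
        while True:
            out = self.process(self.inbox.get())
            for agent in self.subscribers:
                agent.inbox.put(out)

# Toy sensory -> planning -> motor chain, loosely analogous to the
# afferent/efferent pathways of the nervous system
sensor, planner, actuator = Agent("sensor"), Agent("planner"), Agent("actuator")
sensor.subscribers.append(planner)
planner.subscribers.append(actuator)
for a in (sensor, planner, actuator):
    a.start()
sensor.inbox.put({"reading": 0.7})   # inject a sensor event
```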
Abstract:
Robotics is one of the most active research areas, and building robots requires bringing together a large number of disciplines. With these premises, one problem is the management of information from multiple heterogeneous sources. Each component, hardware or software, produces data of a different nature: temporal frequency, processing needs, size, type, etc. Nowadays, technologies and software engineering paradigms such as service-oriented architectures are applied to solve this problem in other areas. This paper proposes the use of these technologies to implement a robotic control system based on services. Such a system allows the integration and collaborative work of the different elements that make up a robotic system.
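As a sketch of what a service-based robotic control layer might look like, each heterogeneous component can be wrapped as a service behind a uniform interface and resolved through a registry; the names and interfaces below are hypothetical, not the paper's API.

```python
class Service:
    """Uniform interface every robot component implements."""
    def call(self, request: dict) -> dict:
        raise NotImplementedError

class CameraService(Service):
    def call(self, request):
        return {"frame_id": 42, "data": b"\x00\x01"}   # placeholder sensor data

class MotorService(Service):
    def call(self, request):
        return {"status": "moving", "velocity": request.get("velocity", 0.0)}

class ServiceRegistry:
    """Decouples producers and consumers of heterogeneous robot data."""
    def __init__(self):
        self._services = {}
    def register(self, name, service):
        self._services[name] = service
    def invoke(self, name, request):
        return self._services[name].call(request)

registry = ServiceRegistry()
registry.register("camera", CameraService())
registry.register("motor", MotorService())
print(registry.invoke("motor", {"velocity": 0.5}))
```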
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer-group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the multi-core and GPU implementations are compared with each other and with their combination. A performance analysis with large images is conducted in order to determine how many pixels to allocate to the CPU and to the GPU; the observed times show that both devices should be given work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-cores are highly advisable and open the door to using such algorithms for real-time processing.
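A sequential sketch of the detection/filtering logic may help: a pixel whose peer group (neighbours closer than a threshold under a fuzzy metric) is too small is classified as impulsive noise and replaced. The fuzzy metric below, the product over channels of (min+K)/(max+K), is one common choice in this literature; the constant K, the thresholds, and the median replacement are illustrative assumptions, and the paper's actual contribution, the CUDA/OpenMP parallelisation and the CPU/GPU workload split, is not shown.

```python
import numpy as np

K = 1024.0   # fuzzy-metric constant (illustrative value)

def fuzzy_similarity(x, y):
    """Fuzzy metric between two RGB pixels: product over channels of
    (min+K)/(max+K); equals 1 for identical pixels, < 1 otherwise."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    return np.prod((np.minimum(x, y) + K) / (np.maximum(x, y) + K))

def denoise_peer_group(img, d=0.97, m=3):
    """Detect impulses with the peer-group criterion and replace detected
    pixels by the marginal median of their 3x3 window.
    img: HxWx3 uint8 array; borders are left untouched for brevity."""
    out = img.copy()
    H, W, _ = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            window = img[i-1:i+2, j-1:j+2].reshape(9, 3)
            centre = img[i, j]
            # count neighbours similar enough to the centre (minus itself)
            peers = sum(fuzzy_similarity(centre, p) >= d for p in window) - 1
            if peers < m:                     # too few peers -> impulse
                out[i, j] = np.median(window, axis=0)
    return out
```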
Abstract:
Information Retrieval systems normally have to work with rather heterogeneous sources, such as Web sites or documents from Optical Character Recognition tools. The correct conversion of these sources into flat text files is not a trivial task, since noise may easily be introduced as a result of spelling or typeset errors. Interestingly, this is not a great drawback when the size of the corpus is sufficiently large, since redundancy helps to overcome noise problems. However, noise becomes a serious problem in restricted-domain Information Retrieval, especially when the corpus is small and has little or no redundancy. This paper devises an approach which adds noise tolerance to Information Retrieval systems. A set of experiments carried out in the agricultural domain demonstrates the effectiveness of the proposed approach.
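One simple way to add the noise tolerance described above is to match query terms against the index vocabulary approximately rather than exactly; the sketch below uses a string-similarity threshold. The technique, threshold, and vocabulary are illustrative assumptions, not necessarily the paper's method.

```python
from difflib import SequenceMatcher

def tolerant_match(query_term, index_terms, threshold=0.8):
    """Map a query term to index terms despite OCR or typing noise by
    accepting any vocabulary entry whose similarity ratio exceeds a
    threshold (illustrative sketch)."""
    return [t for t in index_terms
            if SequenceMatcher(None, query_term, t).ratio() >= threshold]

# Hypothetical noisy vocabulary from an agricultural-domain corpus
vocabulary = ["fertilizer", "fertiliser", "fertlizer", "irrigation"]
print(tolerant_match("fertilizer", vocabulary))
# -> ['fertilizer', 'fertiliser', 'fertlizer']
```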
Abstract:
The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process required to assess their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by the NSGA-II multi-objective genetic algorithm, which can pursue many design goals simultaneously. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
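The core of such a multi-objective search can be illustrated with the Pareto-dominance test by which NSGA-II ranks candidates; here each candidate is a selectively hardened program version scored on (1 - fault coverage, code-size overhead, execution-time overhead), all to be minimised. The version names and numbers below are hypothetical, and the actual tool combines this ranking with genetic operators and the hardening compiler.

```python
def dominates(a, b):
    """Pareto dominance for objective vectors to be minimised, e.g.
    (1 - fault_coverage, size_overhead, time_overhead)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Return the non-dominated hardening configurations, i.e. the set
    NSGA-II ranks first during design space exploration."""
    return [c for c in candidates
            if not any(dominates(o["objectives"], c["objectives"])
                       for o in candidates)]

# Hypothetical evaluations of selectively hardened program versions
versions = [
    {"name": "no_hardening", "objectives": (0.40, 1.00, 1.00)},
    {"name": "harden_loops", "objectives": (0.15, 1.30, 1.20)},
    {"name": "harden_all",   "objectives": (0.05, 2.10, 1.90)},
    {"name": "harden_io",    "objectives": (0.20, 1.45, 1.35)},  # dominated
]
print([v["name"] for v in pareto_front(versions)])
# -> ['no_hardening', 'harden_loops', 'harden_all']
```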
Abstract:
Purpose – The purpose of this paper is to analyse Information Systems outsourcing success, measuring the latter according to the satisfaction level achieved by users and taking into account three success factors: the role played by the client firm's top management; the relationships between client and provider; and the degree of outsourcing. Design/methodology/approach – A survey was carried out by means of a questionnaire answered by 398 large Spanish firms. Its results were examined using partial least squares software and a proposed structural equation model. Findings – The conclusions reveal that the perceived benefits play a mediating role in outsourcing satisfaction and that these benefits can be grouped into three categories: strategic, economic, and technological. Originality/value – The study identifies how some success factors are more influential than others depending on which type of benefits is ultimately sought with outsourcing.