857 results for Computing Classification Systems
Abstract:
This article reviews recent studies of memory systems in humans and nonhuman primates. Three major conclusions from recent work are that (i) the capacity for nondeclarative (nonconscious) learning can now be studied in a broad array of tasks that assess classification learning, perceptuomotor skill learning, artificial grammar learning, and prototype abstraction; (ii) cortical areas adjacent to the hippocampal formation, including entorhinal, perirhinal, and parahippocampal cortices, are an essential part of the medial temporal lobe memory system that supports declarative (conscious) memory; and (iii) in humans, bilateral damage limited to the hippocampal formation is nevertheless sufficient to produce severe anterograde amnesia and temporally graded retrograde amnesia covering as much as 25 years.
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which aim to mimic, and shed light on, the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both for the low temperatures reached and for the size of the systems brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin-glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
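To make concrete why a binary-variable formulation lets floating-point arithmetic be (almost) avoided, here is a minimal single-thread sketch of a Metropolis sweep for a 2D ±1 Edwards–Anderson spin glass, in which the accept/reject test compares random 32-bit integers against a small precomputed table; this is only an illustration of the technique, not the Janus firmware, and all names and parameter values are ours:

```python
import numpy as np

def metropolis_sweep(spins, Jx, Jy, T, rng):
    """One Metropolis sweep of a 2D +-1 Edwards-Anderson spin glass.

    spins  : (L, L) array of +-1 ints
    Jx, Jy : (L, L) arrays of +-1 couplings to the right/down neighbour
    The acceptance test compares 32-bit random integers against a
    precomputed integer table, so the inner loop is integer-only
    (floating point is used once, when the table is built).
    """
    L = spins.shape[0]
    # Energy change of a flip is 2*s*h with integer local field h, so dE lies in {-8,...,8}.
    dE = np.arange(-8, 9, 2)
    thresh = (np.minimum(np.exp(-dE / T), 1.0) * (2**32 - 1)).astype(np.uint64)

    for i in range(L):
        for j in range(L):
            h = (Jx[i, j] * spins[i, (j + 1) % L]
                 + Jx[i, (j - 1) % L] * spins[i, (j - 1) % L]
                 + Jy[i, j] * spins[(i + 1) % L, j]
                 + Jy[(i - 1) % L, j] * spins[(i - 1) % L, j])
            dE_ij = 2 * spins[i, j] * h                # integer in {-8,...,8}
            r = rng.integers(0, 2**32, dtype=np.uint64)
            if r <= thresh[(dE_ij + 8) // 2]:          # integer-only accept test
                spins[i, j] = -spins[i, j]
    return spins

# Illustrative usage on a small random sample.
rng = np.random.default_rng(0)
L = 16
spins = rng.choice([-1, 1], size=(L, L))
Jx = rng.choice([-1, 1], size=(L, L))
Jy = rng.choice([-1, 1], size=(L, L))
metropolis_sweep(spins, Jx, Jy, T=1.1, rng=rng)
```

On an FPGA the same integer comparison can be replicated across many spin-update engines, which is what allows hundreds of spins to be updated per clock cycle.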
Abstract:
Smart grids represent the new generation of electric power systems, combining advances in computing, communication systems, distributed processing and artificial intelligence to provide new functionalities, such as real-time monitoring of electricity demand and consumption and large-scale management of distributed generators, among others, built on a distributed control system layered over the power grid. This structure profoundly changes the way power systems, especially distribution systems, are planned and operated today, and the pursuit of these functionalities opens up interesting possibilities for research and development. With this scenario in mind, the present work uses a multi-agent systems approach to simulate this kind of electric power distribution system, considering distinct control options. The use of multi-agent technology for the simulation is grounded on a conceptualization of smart grids as distributed systems, which is also carried out in this work. To validate the proposal, three functionalities expected of these grids were simulated: classification of non-linear loads; voltage profile management; and topological reconfiguration aimed at reducing electrical losses. All the modelling and development behind these studies is reported here. Finally, the work identifies multi-agent systems as a technology to be employed both for research on and for the implementation of these power grids.
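Purely as an illustration of the multi-agent style of control described above (not the agent design actually developed in the dissertation), a voltage-profile-management interaction between a bus agent and a distributed-generator agent could be sketched as follows; all class, message and parameter names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    performative: str   # e.g. "request", "inform"
    content: dict

class BusAgent:
    """Monitors its bus voltage and asks nearby generator agents for support."""
    def __init__(self, name, v_min=0.95, v_max=1.05):
        self.name, self.v_min, self.v_max = name, v_min, v_max

    def step(self, voltage_pu, neighbours):
        # If the per-unit voltage drops below the limit, request help from neighbours.
        if voltage_pu < self.v_min:
            return [Message(self.name, "request",
                            {"action": "raise_voltage", "bus": self.name})
                    for _ in neighbours]
        return []

class GeneratorAgent:
    """Distributed generator that reacts to voltage-support requests."""
    def __init__(self, name, q_step=0.05):
        self.name, self.q_step = name, q_step
        self.q_injection = 0.0

    def handle(self, msg):
        if msg.performative == "request" and msg.content["action"] == "raise_voltage":
            self.q_injection += self.q_step   # inject more reactive power
            return Message(self.name, "inform", {"q_injection": self.q_injection})
```

In a full simulation such agents would be attached to a power-flow model of the feeder, so that each control action is reflected in the next computed voltage profile.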
Abstract:
Science has made frequent use of computational resources to run experiments and scientific processes, which can be modelled as workflows that manipulate large volumes of data and execute actions such as selection, analysis and visualization of those data according to a defined procedure. Scientific workflows have been used by scientists in many fields, such as astronomy and bioinformatics, and tend to be computationally intensive and heavily oriented to the manipulation of large volumes of data, which calls for high-performance execution platforms such as grids or clouds of computers. To execute workflows on this kind of platform, the available computational resources must be mapped to the workflow activities, a process known as scheduling. Cloud computing platforms have proven to be a viable alternative for running scientific workflows, but scheduling on such platforms usually has to take specific constraints into account, such as a limited budget or the type of computational resource to be used in the execution. In this context, information such as the estimated execution duration or time and cost limits (here called scheduling-support information) is important to ensure that scheduling is efficient and that execution achieves the expected results. This work identifies the support information that can be added to scientific workflow models to underpin efficient scheduling and execution on cloud computing platforms. A classification of this information is proposed, and its use in the main Scientific Workflow Management Systems (SGWC) is analysed. To assess the impact of this information on scheduling, experiments were carried out using scientific workflow models carrying different kinds of support information, scheduled with algorithms adapted to take the added information into account. In these experiments, a reduction of up to 59% in the financial cost of executing the workflow in the cloud and a reduction in makespan of up to 8.6% were observed, compared with the same workflows scheduled without any support information available.
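A minimal sketch of how scheduling-support information attached to a workflow activity might drive a budget-aware choice of cloud instance is shown below; the activity attributes, VM catalogue and numbers are illustrative placeholders, not the model of any particular SGWC or the algorithms evaluated in the work:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    est_runtime_h: float      # scheduling-support information: estimated duration
    deadline_h: float         # scheduling-support information: time limit

@dataclass
class VmType:
    name: str
    speedup: float            # relative to a baseline instance
    price_per_h: float

def choose_vm(activity, catalogue, remaining_budget):
    """Pick the cheapest VM type that still meets the activity's deadline."""
    feasible = []
    for vm in catalogue:
        runtime = activity.est_runtime_h / vm.speedup
        cost = runtime * vm.price_per_h
        if runtime <= activity.deadline_h and cost <= remaining_budget:
            feasible.append((cost, vm))
    if not feasible:
        raise RuntimeError("no VM satisfies the deadline within the budget")
    return min(feasible, key=lambda c_vm: c_vm[0])[1]

# Example: an 8-hour estimate with a 6-hour limit forces the faster instance.
catalogue = [VmType("small", 1.0, 0.10), VmType("large", 2.0, 0.25)]
task = Activity("align_sequences", est_runtime_h=8.0, deadline_h=6.0)
print(choose_vm(task, catalogue, remaining_budget=2.0).name)   # -> "large"
```

Without the estimated duration and the time limit, the scheduler has no basis for rejecting the cheaper but too-slow instance, which is exactly the gap the scheduling-support information is meant to close.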
Resumo:
In this paper, we present a novel coarse-to-fine visual localization approach: contextual visual localization. This approach relies on three elements: (i) a minimal-complexity classifier for performing fast coarse localization (submap classification); (ii) an optimized saliency detector which exploits the visual statistics of the submap; and (iii) a fast view-matching algorithm which filters initial matchings with a structural criterion. The latter algorithm yields fine localization. Our experiments show that these elements have been successfully integrated for solving the global localization problem. Context, that is, the awareness of being in a particular submap, is defined by a supervised classifier tuned for a minimal set of features. Visual context is exploited both for tuning (optimizing) the saliency detection process, and to select potential matching views in the visual database, close enough to the query view.
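The coarse-to-fine idea can be sketched with generic off-the-shelf building blocks; the classifier, features and ORB-based matcher below are stand-ins of our own choosing, not the minimal-complexity classifier, saliency detector or structural filter described in the paper:

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Coarse stage: a lightweight classifier picks the submap ------------------
# Illustrative training data: 2-D global image features labelled with a submap id.
X_train = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]])
y_train = np.array([0, 1, 0, 1])
coarse_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def coarse_localize(global_feature):
    """Return the id of the submap the query view most likely belongs to."""
    return int(coarse_clf.predict(np.asarray(global_feature).reshape(1, -1))[0])

# --- Fine stage: match the query only against views of the selected submap ----
orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def fine_localize(query_img, candidate_imgs):
    """Return the index of the candidate database view with most ORB matches."""
    _, q_des = orb.detectAndCompute(query_img, None)
    scores = []
    for img in candidate_imgs:
        _, des = orb.detectAndCompute(img, None)
        n = len(matcher.match(q_des, des)) if q_des is not None and des is not None else 0
        scores.append(n)
    return int(np.argmax(scores))
```

Restricting the fine matching to the views of the predicted submap is what keeps the view-matching step fast as the visual database grows.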
Abstract:
Poster presented at the 11th Mediterranean Congress of Chemical Engineering, Barcelona, October 21-24, 2008.
Abstract:
The original motivation for this paper was to provide an efficient quantitative analysis of convex infinite (or semi-infinite) inequality systems whose decision variables run over general infinite-dimensional (resp. finite-dimensional) Banach spaces and that are indexed by an arbitrary fixed set J. Parameter perturbations on the right-hand side of the inequalities are required to be merely bounded, and thus the natural parameter space is l ∞(J). Our basic strategy consists of linearizing the parameterized convex system via splitting convex inequalities into linear ones by using the Fenchel–Legendre conjugate. This approach yields that arbitrary bounded right-hand side perturbations of the convex system turn into constant-by-blocks perturbations in the linearized system. Based on advanced variational analysis, we derive a precise formula for computing the exact Lipschitzian bound of the feasible solution map of block-perturbed linear systems, which involves only the system’s data, and then show that this exact bound agrees with the coderivative norm of the aforementioned mapping. In this way we extend to the convex setting the results of Cánovas et al. (SIAM J. Optim. 20, 1504–1526, 2009) developed for arbitrary perturbations with no block structure in the linear framework under the boundedness assumption on the system’s coefficients. The latter boundedness assumption is removed in this paper when the decision space is reflexive. The last section provides the intended application to the convex case.
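The linearization step can be made concrete: since a proper lower semicontinuous convex function is the supremum of its affine minorants, each convex inequality splits into a block of linear inequalities via the Fenchel–Legendre conjugate. In generic notation of our own (not necessarily that of the paper), with right-hand side perturbations p = (p_j) in l∞(J):

```latex
% Convex system indexed by an arbitrary set J, perturbed on the right-hand side:
%   f_j(x) \le b_j + p_j ,  j \in J .
% Since f_j(x) = \sup_{x^*} \{ \langle x^*, x\rangle - f_j^*(x^*) \}, each convex
% inequality is equivalent to a block of linear inequalities:
\[
  f_j(x)\le b_j + p_j
  \quad\Longleftrightarrow\quad
  \langle x^*,x\rangle - f_j^*(x^*) \le b_j + p_j
  \quad\text{for all } x^*\in\operatorname{dom} f_j^* .
\]
% The single scalar p_j is shared by every linear inequality generated by f_j,
% which is exactly a constant-by-blocks perturbation of the linearized system.
```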
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer-group concept and uses a fuzzy metric. An optimization study of the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the multi-core, GPU, and combined implementations are compared. A performance analysis with large images is conducted in order to determine how many pixels should be allocated to the CPU and how many to the GPU. The observed times show that both devices should share the work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-cores are highly advisable, and they open the door to using such algorithms for real-time processing.
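A minimal serial sketch of the peer-group test at the heart of such a filter is given below; the fuzzy metric is one common choice from the literature, the parameter values and replacement rule are illustrative, and neither the paper's exact filter nor its CUDA/OpenMP parallelisation is reproduced here:

```python
import numpy as np

K = 1024  # constant of the fuzzy metric (illustrative value)

def fuzzy_metric(a, b):
    """Fuzzy similarity between two RGB pixels, in (0, 1]; 1 means identical."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.prod((np.minimum(a, b) + K) / (np.maximum(a, b) + K))

def denoise_pixel(window, d=0.985, min_peers=3):
    """Peer-group test on a 3x3 RGB window (centre pixel is index 4 when flattened).

    If fewer than `min_peers` neighbours are close enough to the centre
    (fuzzy similarity >= d), the centre is declared impulsive noise and
    replaced by the channel-wise median of the window.
    """
    pixels = window.reshape(-1, 3)
    centre = pixels[4]
    peers = sum(fuzzy_metric(centre, p) >= d
                for i, p in enumerate(pixels) if i != 4)
    if peers < min_peers:
        return np.median(pixels, axis=0).astype(window.dtype)
    return centre
```

Because every pixel's test depends only on its own window, the loop over pixels is embarrassingly parallel, which is what makes the OpenMP and CUDA versions, and the CPU/GPU split of the image, straightforward to set up.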
Abstract:
In the chemical textile domain, experts have to analyse chemical components and substances that might be harmful when used in clothing and textiles. Part of this analysis is performed by searching for opinions and reports that people have expressed about these products on the Social Web. However, this type of information is not as frequent on the Internet for this domain as for others, so its detection and classification are difficult and time-consuming. Consequently, problems associated with the use of chemical substances in textiles may not be detected early enough and could lead to health problems, such as allergies or burns. In this paper, we propose a framework able to detect, retrieve, and classify subjective sentences related to the chemical textile domain, which could be integrated into a wider health surveillance system. We also describe the creation of several datasets with opinions from this domain, the experiments performed using machine learning techniques and different lexical resources such as WordNet, and the evaluation focusing on sentiment classification and complaint detection (i.e., negativity). Despite the challenges involved in this domain, our approach obtains promising results, with an F-score of 65% for polarity classification and 82% for complaint detection.
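A baseline of the kind of supervised polarity classifier such experiments build on can be sketched with scikit-learn; the tiny inline sentences are invented placeholders, and the feature choices do not reproduce the paper's actual datasets, WordNet-based features, or complaint-detection setup:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Placeholder training data: subjective sentences about textile chemicals.
sentences = [
    "This dye caused a rash after one wash, terrible product.",
    "The fabric finish is safe and feels great on the skin.",
    "Burning smell and skin irritation, I want a refund.",
    "No allergies at all, very comfortable shirt.",
]
labels = ["negative", "positive", "negative", "positive"]

# Bag-of-ngrams features feeding a linear SVM, a common sentiment baseline.
polarity_clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
polarity_clf.fit(sentences, labels)

print(polarity_clf.predict(["The shirt gave me an allergic reaction"]))
```

Complaint detection can be framed the same way, as a binary classifier over the negative class, which is why both tasks can share one retrieval-and-classification pipeline.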
Abstract:
Background: The harmonization of European health systems brings with it a need for tools to allow the standardized collection of information about medical care. A common coding system and standards for the description of services are needed to allow local data to be incorporated into evidence-informed policy, and to permit equity and mobility to be assessed. The aim of this project has been to design such a classification and a related tool for the coding of services for Long Term Care (DESDE-LTC), based on the European Service Mapping Schedule (ESMS). Methods: The development of DESDE-LTC followed an iterative process using nominal groups in 6 European countries. 54 researchers and stakeholders in health and social services contributed to this process. In order to classify services, we use the minimal organization unit or “Basic Stable Input of Care” (BSIC), coded by its principal function or “Main Type of Care” (MTC). The evaluation of the tool included an analysis of feasibility, consistency, ontology, inter-rater reliability, Boolean Factor Analysis, and a preliminary impact analysis (screening, scoping and appraisal). Results: DESDE-LTC includes an alpha-numerical coding system, a glossary and an assessment instrument for mapping and counting LTC. It shows high feasibility, consistency, inter-rater reliability and face, content and construct validity. DESDE-LTC is ontologically consistent. It is regarded by experts as useful and relevant for evidence-informed decision making. Conclusion: DESDE-LTC contributes to establishing a common terminology, taxonomy and coding of LTC services in a European context, and a standard procedure for data collection and international comparison.
Abstract:
This paper discusses a process to graphically view and analyze information obtained from a network of urban streets, using an algorithm that establishes a ranking of importance of the nodes of the network. The basis of this process is to quantify the information obtained from the network by assigning numerical values to each node that represent that information numerically. These values are used to construct a data matrix to which a classification algorithm is applied, ordering the nodes of the network by importance. From this numerical ranking of the nodes, the process finishes with the graphical visualization of the network. An example is shown to illustrate the whole process.
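The ranking step could be illustrated, for instance, with a value-weighted PageRank centrality over the street graph; the abstract does not name the specific classification algorithm, so PageRank is only a stand-in here, and the toy intersection graph and traffic values are hypothetical:

```python
import networkx as nx

# Toy network of urban intersections (nodes) and street segments (edges),
# with a numerical value attached to each node (e.g. a measured traffic volume).
streets = nx.Graph()
streets.add_edges_from([("A", "B"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")])
traffic = {"A": 120, "B": 430, "C": 310, "D": 380, "E": 90}

# Rank nodes with a personalized PageRank that biases the random walk by the
# per-node values, then list them in decreasing order of importance.
scores = nx.pagerank(streets, personalization=traffic)
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

The resulting ordering is what would then drive the graphical visualization, for example by scaling node size or colour by rank.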
Abstract:
In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust, so that objects can be recognized properly in poor lighting conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power-consumption mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated into mobile robotics systems, ambient intelligence or ambient assisted living applications. The acquisition system is based on the color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or the PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated will be quite broad, ranging from acquisition, outlier removal or filtering of the input data, and segmentation or characterization of regions of interest in the scene, to object recognition and pose estimation themselves. Furthermore, in order to validate the proposed system, we will create a 3D object dataset. It will be composed of a set of 3D models reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor, and variations in the pose of the target objects. The creation of this dataset implies the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
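One of the preprocessing stages mentioned above, statistical outlier removal on the RGB-D point cloud, can be sketched in plain NumPy; this is a generic textbook formulation under parameters of our own choosing, not the project's actual (GPU-optimised) implementation:

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is more
    than `std_ratio` standard deviations above the cloud-wide mean.

    points : (N, 3) array of XYZ coordinates from the RGB-D sensor.
    Returns the filtered (M, 3) array.
    """
    # Brute-force pairwise distances (fine for small clouds; a KD-tree would
    # be used in practice for full-resolution sensor frames).
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)

    # Mean distance to the k nearest neighbours of each point.
    knn_mean = np.sort(dists, axis=1)[:, :k].mean(axis=1)

    threshold = knn_mean.mean() + std_ratio * knn_mean.std()
    return points[knn_mean <= threshold]
```

Cleaning the cloud this way before segmentation and feature extraction reduces both the amount of data the Jetson-class GPU has to process and the number of spurious keypoints fed to the recognition stage.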
Abstract:
Paper presented at the V Jornadas de Computación Empotrada (5th Workshop on Embedded Computing), Valladolid, 17-19 September 2014.
Abstract:
Dissertation submitted to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doutor Osvaldo Arede dos Santos, Adjunct Professor of the Unidade Técnico Científica de Informática of the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco.