949 results for Parallel programming (computer)
Abstract:
This paper presents an approach to the study of parallel systems using sequential tools. Independent and-parallelism in Prolog is an example of a parallel processing paradigm in the framework of logic programming, and implementations such as &-Prolog uncover the potential performance of parallel processing. This potential, however, can also be explored using only sequential systems. Since the spirit of this paper is to show how this can be done with a standard system, only standard Prolog is used in the implementations included. These implementations comprise tests for parallelism in And-Prolog, a correctness-checking meta-interpreter of &-Prolog, and a simulator of parallel execution for &-Prolog.
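As an illustration of the kind of test the abstract refers to, the sketch below checks strict independence of two goals (no shared unbound variables), the usual condition for independent and-parallel execution. The term representation and helper names are assumptions of this sketch, not taken from the paper.

```python
# Illustrative sketch (not from the paper): strict-independence test for
# independent and-parallelism. Two goals may run in parallel if they share
# no unbound variables. Terms are modeled as nested tuples; variables are
# strings starting with an uppercase letter (an assumption of this sketch).

def variables(term):
    """Collect the set of variable names occurring in a term."""
    if isinstance(term, str):
        return {term} if term[:1].isupper() else set()
    if isinstance(term, tuple):          # compound term: (functor, arg1, ...)
        return set().union(*(variables(arg) for arg in term[1:]))
    return set()

def strictly_independent(goal_a, goal_b, bindings=None):
    """Goals are strictly independent if, after applying the current
    bindings, they share no variables."""
    bindings = bindings or {}
    def deref(term):
        while isinstance(term, str) and term in bindings:
            term = bindings[term]
        if isinstance(term, tuple):
            return (term[0],) + tuple(deref(a) for a in term[1:])
        return term
    return variables(deref(goal_a)).isdisjoint(variables(deref(goal_b)))

# Example: p(X, Y) and q(Z) are independent; p(X, Y) and q(Y) are not.
print(strictly_independent(("p", "X", "Y"), ("q", "Z")))   # True
print(strictly_independent(("p", "X", "Y"), ("q", "Y")))   # False
```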
Abstract:
The crop simulation model AquaCrop, recently developed by FAO, can be used for a wide range of purposes. However, in its present form, its use over large areas or for applications that require a large number of simulation runs (e.g., long-term analyses) is not practical without software to facilitate such applications. Two tools for managing the inputs and outputs of AquaCrop, named AquaData and AquaGIS, have been developed for this purpose and are presented here. Both software utilities have been programmed in Delphi v. 5; in addition, AquaGIS requires the Geographic Information System (GIS) programming tool MapObjects. These utilities allow the efficient management of input and output files, along with a GIS module to perform spatial analyses and visualize the results spatially, facilitating knowledge dissemination. A sample application of the utilities is given here: an AquaCrop simulation analysis of the impact of climate change on wheat yield in Southern Spain, which requires extensive input data preparation and output processing. The use of AquaCrop without the two utilities would have required approximately 1000 h of work, while the use of AquaData and AquaGIS reduced that time by more than 99%. Furthermore, the use of GIS made it possible to perform a spatial analysis of the results, thus providing a new option to extend the use of the AquaCrop model to scales requiring spatial and temporal analyses.
Abstract:
A method for formulating and algorithmically solving the equations of finite element problems is presented. The method starts with a parametric partition of the domain into juxtaposed strips that permits sweeping the whole region by a sequential addition (or removal) of adjacent strips. The solution of the difference equations constructed over that grid proceeds along with the addition (or removal) of strips in a manner resembling the transfer matrix approach, except that different rules of composition, which lead to numerically stable algorithms, are used for the stiffness matrices of the strips. Dynamic programming and invariant imbedding ideas underlie the construction of such rules of composition. Among other features of interest, the present methodology gives the analyst some control over the type and quantity of data to be computed. In particular, the one-sweep method presented in Section 9, with no apparent counterpart in standard methods, appears to be very efficient insofar as time and storage are concerned. The paper ends with the presentation of a numerical example.
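One common numerically stable way to compose adjacent strips, offered only as an illustration of the idea described above (the paper's exact composition rules may differ), is static condensation of the shared interface via a Schur complement rather than multiplication of transfer matrices:

```python
# Illustrative sketch: joining two adjacent strips by static condensation of
# the shared interface, a numerically stable alternative to multiplying
# transfer matrices. Assumption: each strip stiffness couples only its two
# boundary DOF sets (interior DOFs already condensed out).
import numpy as np

def join_strips(KA, KB, n_shared):
    """KA couples DOFs [left_A | shared], KB couples DOFs [shared | right_B].
    Returns the condensed stiffness coupling [left_A | right_B]."""
    na = KA.shape[0] - n_shared           # DOFs kept on the left boundary
    nb = KB.shape[0] - n_shared           # DOFs kept on the right boundary
    A_aa, A_am = KA[:na, :na], KA[:na, na:]
    A_ma, A_mm = KA[na:, :na], KA[na:, na:]
    B_mm, B_mb = KB[:n_shared, :n_shared], KB[:n_shared, n_shared:]
    B_bm, B_bb = KB[n_shared:, :n_shared], KB[n_shared:, n_shared:]
    M = A_mm + B_mm                       # assembled stiffness of the shared interface
    S = np.linalg.solve(M, np.hstack([A_ma, B_mb]))   # interface response
    top = np.hstack([A_aa, np.zeros((na, nb))]) - A_am @ S
    bot = np.hstack([np.zeros((nb, na)), B_bb]) - B_bm @ S
    return np.vstack([top, bot])          # Schur complement of the interface
```

Sweeping the domain then amounts to folding this composition over the sequence of strip stiffness matrices.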
Abstract:
The main purpose of this work is to describe the case of an online Java Programming course for engineering students designed to teach computer programming while also exercising other non-technical abilities: online training, self-assessment, teamwork and the use of foreign languages. It is important that students develop confidence and competence in these skills, which will be required later in their professional tasks and/or in other engineering courses (life-long learning). Furthermore, this paper presents the pedagogical methodology, the results drawn from this experience and an objective performance comparison with another conventional (face-to-face) Java course.
Abstract:
This project presents a technical report on the Leap Motion camera and its corresponding Software Development Kit, a device with a depth camera aimed at human-machine interfaces. The purpose is to develop a human-machine interface based on a hand-gesture recognition system. After an exhaustive study of the Leap Motion camera, several example programs were written with the intention of verifying the capabilities described in the technical report, exercising the Application Programming Interface and evaluating the precision of the different measurements obtained from the camera data. Finally, a prototype of a gesture recognition system is developed. The fingertip position and orientation data obtained from the Leap Motion are used to describe a gesture as a descriptor vector, which is fed to a Support Vector Machine used as a multi-class classifier.
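A minimal sketch of the classification stage described above is given below; the descriptor layout, gesture labels and SVM hyperparameters are assumptions of this sketch, not the project's exact design.

```python
# Illustrative sketch: turn per-finger position/direction data into a
# fixed-length descriptor and classify it with a multi-class SVM.
import numpy as np
from sklearn.svm import SVC

def gesture_descriptor(fingertips):
    """fingertips: list of 5 (position_xyz, direction_xyz) pairs, assumed to
    be palm-relative. Returns a flat 30-element descriptor vector."""
    return np.concatenate([np.concatenate([p, d]) for p, d in fingertips])

def train_classifier(X, y):
    """X: matrix of descriptors, y: gesture labels (e.g. "fist", "point").
    SVC handles the multi-class case with a one-vs-one scheme by default."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    return clf.fit(X, y)

# prediction = clf.predict(gesture_descriptor(current_frame).reshape(1, -1))
```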
Abstract:
Operating theatres are the engine of hospitals; proper management of the operating rooms and their staff is a great challenge for managers, and the results directly impact the hospital's budget. This work presents a MILP model for the efficient scheduling of multiple surgeries in Operating Rooms (ORs) during a working day. The model considers multiple surgeons, multiple ORs and different types of surgeries. Stochastic strategies are also implemented to take into account the uncertainty in surgery durations (pre-incision, incision and post-incision times). In addition, heuristic-based methods and a MILP decomposition approach are proposed for solving large-scale OR scheduling problems in a computationally efficient way. All these computer-aided strategies have been implemented in AIMMS, an advanced modeling and optimization package, resulting in a user-friendly solution tool for operating room management under uncertainty.
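To make the flavour of such a model concrete, here is a much-simplified deterministic sketch (not the paper's AIMMS model): surgeries are assigned to rooms for one working day so as to minimize the latest room closing time. Durations, room names and the day length are illustrative assumptions.

```python
# Much-simplified deterministic OR-scheduling sketch using PuLP.
import pulp

surgeries = {"s1": 120, "s2": 90, "s3": 60, "s4": 150}   # expected minutes
rooms = ["OR1", "OR2"]
day_length = 480                                          # minutes available per OR

prob = pulp.LpProblem("or_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (surgeries, rooms), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan                                          # objective: latest closing time
for s in surgeries:                                       # each surgery in exactly one OR
    prob += pulp.lpSum(x[s][r] for r in rooms) == 1
for r in rooms:                                           # room workload bounded by makespan
    load = pulp.lpSum(surgeries[s] * x[s][r] for s in surgeries)
    prob += load <= makespan
    prob += load <= day_length

prob.solve()
print({r: [s for s in surgeries if x[s][r].value() == 1] for r in rooms})
```

The stochastic and decomposition strategies of the paper would extend this kind of core assignment model; they are not reproduced here.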
Abstract:
Embedded systems are increasingly common and complex, so finding safe, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike what was the case until recently, technological advances in microprocessors now allow the development of equipment with more than enough performance to run several software systems on a single machine. Moreover, there are embedded systems with safety requirements on whose correct operation many lives and/or large economic investments depend. These software systems are designed and implemented according to very strict and demanding software development standards. In some cases certification of the software may also be necessary. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels run on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very interesting technology for containing these costs. This technology allows a set of virtual machines, or partitions, to run applications with very high levels of both temporal and spatial isolation. This, in turn, allows each partition to be certified independently. Developing partitioned mixed-criticality systems requires updating traditional software development models, since these cover neither the new activities nor the new roles required in the development of such systems. For example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition in which the application will run. Traditionally, the V-model has been especially relevant in embedded systems development. This model has therefore been adapted to cover scenarios such as the parallel development of applications or the incorporation of a new partition into an existing system. The goal of this doctoral thesis is to improve the current technology for developing partitioned mixed-criticality systems. To that end, an environment aimed specifically at facilitating and improving the development processes of this class of systems has been designed and implemented. In particular, an algorithm that generates the system partitioning automatically has been created. The proposed development environment integrates all the activities needed to develop a partitioned system, including the new roles and activities mentioned above. Furthermore, the design of the development environment is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. Thus, the necessary tools are provided to model and partition the system, as well as to validate the results and generate the artifacts needed to compile, build and deploy it. In addition, extensibility and integration with validation tools have been key factors in the design of the development environment.
In particular, new non-functional requirements can be incorporated into the development environment, as can the generation of new artifacts such as documentation or different programming languages. A key part of the development environment is the partitioning algorithm. This algorithm has been designed to be independent of the applications' requirements and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined. The algorithm guarantees that these constraints are satisfied in the partitioned system that results from its execution. The partitioning constraints have been designed with enough expressive power so that, with a small set of them, most of the common non-functional requirements can be expressed. The constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as its inputs. As a result of its execution, it generates a deployment model that defines the partitions needed to partition the system. In turn, each partition defines which applications must run in it and the resources the partition needs to run correctly. The partitioning problem and the partitioning constraints are modeled mathematically as colored graphs, in which a proper vertex coloring represents a correct system partitioning. The algorithm has also been designed so that, if necessary, it can produce alternative partitionings to the one initially proposed. The development environment, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind turbine. In addition, the algorithm has been validated by running numerous synthetic scenarios, including some very complex ones with more than 500 applications. ABSTRACT The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements. Their failure may result in personal or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards. In some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases, it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define system partitions. Application development has to consider the characteristics of the partition to which it is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development. It can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a new partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. The framework provides basic means for modeling the system, generating system partitions, validating the system and generating final artifacts. The framework has been designed to facilitate its extension and the integration of external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and for further final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and also to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints and can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm uses system models and partitioning constraints as its inputs. It generates a deployment model that is composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph, and a valid partitioning is a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated by using a large number of synthetic loads, including complex scenarios with more than 500 applications.
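The graph-coloring view described in the abstract can be illustrated with a deliberately simplified sketch: applications are vertices, a "must not share a partition" constraint is an edge, and a proper vertex coloring assigns applications to partitions. The greedy strategy and the constraint encoding are assumptions of this sketch, not the thesis algorithm.

```python
# Simplified sketch: partitioning as proper vertex coloring.

def partition(apps, separation_constraints):
    """apps: list of application names.
    separation_constraints: set of frozensets {a, b} that must be separated.
    Returns a dict app -> partition index (a proper coloring)."""
    neighbors = {a: set() for a in apps}
    for a, b in (tuple(c) for c in separation_constraints):
        neighbors[a].add(b)
        neighbors[b].add(a)

    coloring = {}
    for app in sorted(apps, key=lambda a: -len(neighbors[a])):  # most-constrained first
        used = {coloring[n] for n in neighbors[app] if n in coloring}
        color = 0
        while color in used:                                    # smallest free partition
            color += 1
        coloring[app] = color
    return coloring

# Hypothetical example: the critical application must be isolated from the rest.
constraints = {frozenset({"flight_ctrl", "telemetry"}),
               frozenset({"flight_ctrl", "payload"})}
print(partition(["flight_ctrl", "telemetry", "payload"], constraints))
# e.g. {'flight_ctrl': 0, 'telemetry': 1, 'payload': 1}
```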
Abstract:
This thesis studies the representation, modeling and comparison of collections using ontologies in the context of the Semantic Web. Collections, understood as groupings of objects or elements with their own identity, are constructions that appear frequently in practically every real-world domain; it is therefore essential to have conceptualizations of these abstract structures, and representations of those conceptualizations in computer systems, that adequately define their semantics. While in many areas of Computer Science and Artificial Intelligence, such as programming, databases or information retrieval, collections have been studied extensively and representations have been developed that respond to a multitude of conceptualizations, in the Semantic Web their study has been rather limited. In fact, to date there are few proposals for representing collections with ontologies, and the existing ones cover only some types of collections and have important limitations. This prevents an adequate representation of collections and hinders other common tasks such as comparing collections, something critical in usual operations such as semantic search or data linking on the Semantic Web. To solve this problem, this thesis proposes a model of collections based on a new classification of collections according to their structural characteristics (homogeneity, uniqueness, order and cardinality). This classification makes it possible to define a taxonomy of up to 16 different types of collections. Among other advantages, this new classification makes it possible to exploit the semantics of the structural properties of each collection type in order to perform comparisons using the most appropriate similarity and dissimilarity functions. Accordingly, the thesis also develops a new catalog of similarity functions for the different collections, gathering the best-known (dis)similarity functions as well as some new ones. This proposal has been implemented through two parallel ontologies: the E-Collections ontology, which represents the different collection types of the taxonomy and their axiomatization, and the SIMEON ontology (Similarity Measures Ontology), which represents the types of (dis)similarity functions for each collection type. Thanks to these ontologies, in order to compare two collections, once they are represented as instances of the most appropriate class of the E-Collections ontology, it is automatically known which (dis)similarity functions of the SIMEON ontology can be used for their comparison. Abstract This thesis studies the representation, modeling and comparison of collections in the Semantic Web using ontologies. Collections, understood as groups of objects or elements with their own identities, are constructions that appear frequently in almost all areas of the real world. Therefore, it is essential to have conceptualizations of these abstract structures, and representations of these conceptualizations in computer systems, that properly define their semantics. While in many areas of Computer Science and Artificial Intelligence, such as Programming, Databases or Information Retrieval, collections have been extensively studied and there are representations that match many conceptualizations, in the field of the Semantic Web their study has been quite limited.
In fact, there are few representations of collections using ontologies so far, and they only cover some types of collections and have important limitations. This hinders a proper representation of collections and other common tasks like comparing collections, something critical in usual operations such as semantic search or linking data on the Semantic Web. To solve this problem, this thesis makes a proposal for modelling collections based on a new classification of collections according to their structural characteristics (homogeneity, uniqueness, order and cardinality). This classification makes it possible to define a taxonomy with up to 16 different types of collections. Among other advantages, this new classification can leverage the semantics of the structural properties of each type of collection to make comparisons using the most appropriate (dis)similarity functions. Thus, the thesis also develops a new catalog of similarity functions for the different types of collections. This catalog contains the most common (dis)similarity functions as well as new ones. This proposal is implemented through two parallel ontologies: the E-Collections ontology, which represents the different types of collections in the taxonomy and their axiomatization, and the SIMEON ontology (Similarity Measures Ontology), which represents the types of (dis)similarity functions for each type of collection. Thanks to these ontologies, to compare two collections, once represented as instances of the appropriate class of the E-Collections ontology, we can automatically determine which (dis)similarity functions of the SIMEON ontology are suitable for the comparison. Finally, the feasibility and usefulness of this modeling and comparison of collections is demonstrated in the field of oenology, applying both the E-Collections and SIMEON ontologies to the representation and comparison of wines with the E-Baco ontology.
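The core idea, selecting a (dis)similarity function from the structural type of a collection, can be illustrated with the toy dispatch below. The dispatch table and the particular functions are assumptions of this sketch, not the E-Collections/SIMEON ontologies themselves.

```python
# Toy sketch: pick a similarity function according to the collection type.
from collections import Counter

def jaccard(a, b):                       # unordered, unique elements
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def multiset_jaccard(a, b):              # unordered, repeated elements
    a, b = Counter(a), Counter(b)
    union = sum((a | b).values())
    return sum((a & b).values()) / union if union else 1.0

def levenshtein_similarity(a, b):        # ordered sequences
    m, n = len(a), len(b)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return 1 - d[m][n] / max(m, n, 1)

SIMILARITY_BY_TYPE = {"set": jaccard,
                      "multiset": multiset_jaccard,
                      "sequence": levenshtein_similarity}

def compare(kind, a, b):
    return SIMILARITY_BY_TYPE[kind](a, b)

print(compare("set", ["a", "b", "c"], ["b", "c", "d"]))   # 0.5
print(compare("sequence", "kitten", "sitting"))           # ~0.571
```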
Abstract:
Hybrid wind-diesel power systems have great potential for supplying energy to remote communities. Compared with traditional diesel systems, hybrid power plants offer major advantages such as providing extra energy capacity to microgrids, reducing pollutants and greenhouse-gas emissions, and hedging the risk of unexpected fuel price increases. The main objective of this thesis is to provide new insights for the assessment and optimization of hybrid wind-diesel power systems under uncertainty. Since wind power is a stochastic variable, it can be neither controlled nor accurately predicted. The uncertain nature of wind as an energy source creates serious problems both for the operation of the hybrid wind-diesel system and for the assessment of its value. On the one hand, regulating the power injected by the wind turbines is a difficult task when operating the hybrid system. On the other hand, the economic benefit of a hybrid wind-diesel system is obtained directly through the wind energy delivered to the power grid. Consequently, the uncertainty of the wind resource increases the difficulty of estimating the overall benefits at the planning stage. The main shortcoming of the traditional deterministic model is that it does not take future uncertainty into account when making the operating decision, and therefore does not provide for flexible operational actions in response to future scenarios. Performance analysis and computer simulation of the San Cristobal Wind Project show that the uncertainty about wind energy, the control strategies, the energy storage, and the wind turbine power curve have a significant impact on system performance. This thesis analyzes the relationship between option valuation theory and the decision-making process. A real-options model is developed and presented through practical examples to assess the value of hybrid wind-diesel power systems. The results show that operational options can add value to the hybrid power system when this operational flexibility is used correctly. This framework can be applied to short-term operation optimization, taking into account the path-dependent nature of the optimal dispatch policy given plausible future realizations of wind power production. Compared with existing valuation and optimization methods, the numerical case study shows that the operating policy resulting from the proposed optimization model performs remarkably well in reducing the total fuel consumption of the wind-diesel system. In order to make optimal decisions, power plant operators and managers should not focus only on the direct outcome of each operational action, nor should they make deterministic decisions. The correct approach is to manage the power system dynamically, taking into account the conditional future value of each option in the face of uncertainty. ABSTRACT Hybrid wind-diesel power systems have great potential in providing energy supply to remote communities.
Compared with the traditional diesel systems, hybrid power plants provide many advantages such as extra energy capacity for the micro-grid, reduced pollution and greenhouse-gas emissions, and a hedge against the risk of unexpected fuel price increases. This dissertation aims at providing novel insights for assessing and optimizing hybrid wind-diesel power systems considering the related uncertainties. Since wind power can neither be controlled nor accurately predicted, the energy harvested from a wind turbine may be considered a stochastic variable. The uncertain nature of the wind energy source results in serious problems for both the operation and the value assessment of the hybrid wind-diesel power system. On the one hand, regulating the uncertain power injected from wind turbines is a difficult task when operating the hybrid system. On the other hand, the economic profit of a hybrid wind-diesel system is achieved directly through the wind energy delivered to the power grid. Therefore, the uncertainty of wind resources increases the difficulty of estimating the total benefits in the planning stage. The main concern with the traditional deterministic model is that it does not consider future uncertainty when making the dispatch decision. Thus, it does not provide flexible operational actions in response to uncertain future scenarios. Performance analysis and computer simulation of the San Cristobal Wind Project demonstrate that the wind power uncertainty, control strategies, energy storage, and the wind turbine power curve have a significant impact on the performance of the system. In this dissertation, the relationship between option pricing theory and the decision-making process is discussed. A real-options model is developed and presented through practical examples for assessing the value of hybrid wind-diesel power systems. Results show that operational options can provide additional value to the hybrid power system when this operational flexibility is correctly utilized. This framework can be applied to optimizing short-term dispatch decisions considering the path-dependent nature of the optimal dispatch policy, given the plausible future realizations of the wind power production. Compared with existing valuation and optimization methods, the result of the numerical example shows that the dispatch policy resulting from the proposed optimization model exhibits a remarkable performance in minimizing the total fuel consumption of the wind-diesel system. In order to make optimal decisions, power plant operators and managers should not just focus on the direct outcome of each operational action; neither should they make deterministic decisions. The correct way is to dynamically manage the power system by taking into consideration the conditional future value of each option in response to the uncertainty.
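A toy backward-induction sketch of the idea of "conditional future value of each option" is given below: the option to switch a diesel unit on or off under uncertain wind, valued on a recombining binomial wind lattice. All parameters, the cost structure and the lattice itself are illustrative assumptions, not the dissertation's model.

```python
# Toy stochastic dynamic program: diesel on/off flexibility under wind uncertainty.
T = 6                      # decision stages (e.g. hours)
LOAD = 100.0               # constant demand, kW
W0, UP, DOWN, P_UP = 60.0, 1.2, 0.8, 0.5          # binomial wind lattice parameters
NO_LOAD, FUEL, START, VOLL = 5.0, 0.3, 40.0, 2.0  # illustrative cost coefficients

def wind(t, k):                      # wind power after k up-moves in t steps
    return W0 * UP**k * DOWN**(t - k)

def stage_cost(on, w):
    shortfall = max(LOAD - w, 0.0)
    return NO_LOAD + FUEL * shortfall if on else VOLL * shortfall

# V[k][status] = expected cost-to-go at node (t, k) with diesel status (0/1)
V = [{0: 0.0, 1: 0.0} for _ in range(T + 1)]       # terminal values
for t in range(T - 1, -1, -1):
    newV = []
    for k in range(t + 1):
        node = {}
        for prev_on in (0, 1):
            options = []
            for act in (0, 1):                     # keep diesel off or on
                cont = P_UP * V[k + 1][act] + (1 - P_UP) * V[k][act]
                switch = START if (act and not prev_on) else 0.0
                options.append(stage_cost(act, wind(t, k)) + switch + cont)
            node[prev_on] = min(options)           # value of the better option
        newV.append(node)
    V = newV

print("expected cost with flexible dispatch:", round(V[0][0], 1))
```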
Abstract:
This paper presents an online C compiler designed so that students can program their practical assignments in Programming courses. What is really innovative is the self-assessment of exercises based on black-box tests, which also trains the students' skill in testing software. Moreover, this tool lets instructors not only propose and classify practical exercises, but also automatically evaluate the effort invested and the results obtained by the students. The system has been applied to first-year students of the Industrial Engineering specialization at the Universidad Politecnica de Madrid. Results show that the students obtained better academic performance, reducing the failure rate in the practical exam considerably with respect to previous years; in addition, an anonymous survey showed that students are satisfied with the system because they get instant feedback about their programs.
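The black-box grading idea can be sketched as follows; the file names, test format and commands are assumptions of this sketch, not the tool's actual interface.

```python
# Minimal black-box grading sketch: compile a C submission and compare the
# program's output against the expected output for each test case.
import subprocess, tempfile, pathlib

def grade(source_file, tests):
    """tests: list of (stdin_text, expected_stdout) pairs. Returns passed count."""
    exe = pathlib.Path(tempfile.mkdtemp()) / "submission"
    compile_res = subprocess.run(["gcc", source_file, "-o", str(exe)],
                                 capture_output=True, text=True)
    if compile_res.returncode != 0:
        return 0                                   # compilation error: no tests pass
    passed = 0
    for stdin_text, expected in tests:
        try:
            run = subprocess.run([str(exe)], input=stdin_text, text=True,
                                 capture_output=True, timeout=5)
            passed += run.stdout.strip() == expected.strip()
        except subprocess.TimeoutExpired:
            pass                                   # non-terminating run counts as failure
    return passed

# Hypothetical usage: grade("sum.c", [("2 3\n", "5"), ("10 -4\n", "6")])
```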
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, normally are completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
We describe a procedure for the generation of chemically accurate computer-simulation models to study chemical reactions in the condensed phase. The process involves (i) the use of a coupled semiempirical quantum and classical molecular mechanics method to represent solutes and solvent, respectively; (ii) the optimization of semiempirical quantum mechanics (QM) parameters to produce a computationally efficient and chemically accurate QM model; (iii) the calibration of a quantum/classical microsolvation model using ab initio quantum theory; and (iv) the use of statistical mechanical principles and methods to simulate, on massively parallel computers, the thermodynamic properties of chemical reactions in aqueous solution. The utility of this process is demonstrated by the calculation of the enthalpy of reaction in vacuum and free energy change in aqueous solution for a proton transfer involving methanol, methoxide, imidazole, and imidazolium, which are functional groups involved with proton transfers in many biochemical systems. An optimized semiempirical QM model is produced, which results in the calculation of heats of formation of the above chemical species to within 1.0 kcal/mol (1 kcal = 4.18 kJ) of experimental values. The use of the calibrated QM and microsolvation QM/MM (molecular mechanics) models for the simulation of a proton transfer in aqueous solution gives a calculated free energy that is within 1.0 kcal/mol (12.2 calculated vs. 12.8 experimental) of a value estimated from experimental pKa values of the reacting species.
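As a worked illustration of how a free-energy reference value can be estimated from pKa values for the proton transfer methanol + imidazole → methoxide + imidazolium, the snippet below uses approximate literature pKa values; these numbers are assumptions of this sketch and are not the paper's (which quotes 12.8 kcal/mol).

```python
# Free-energy estimate for a proton transfer from pKa values (illustrative).
import math

R = 1.987e-3           # kcal / (mol K)
T = 298.15             # K
PKA_METHANOL = 15.5    # acid being deprotonated (assumed approximate value)
PKA_IMIDAZOLIUM = 7.0  # conjugate acid of the base (assumed approximate value)

# HA + B -> A- + BH+  =>  delta_G = RT ln(10) * (pKa(HA) - pKa(BH+))
delta_G = R * T * math.log(10) * (PKA_METHANOL - PKA_IMIDAZOLIUM)
print(f"Estimated delta_G ~ {delta_G:.1f} kcal/mol")   # ~11.6 kcal/mol
```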
Abstract:
We describe the hardwired implementation of algorithms for Monte Carlo simulations of a large class of spin models. We have implemented these algorithms as VHDL codes and we have mapped them onto a dedicated processor based on a large FPGA device. The measured performance on one such processor is comparable to O(100) carefully programmed high-end PCs: it turns out to be even better for some selected spin models. We describe here codes that we are currently executing on the IANUS massively parallel FPGA-based system.
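For reference, the kind of update such machines implement in hardware can be written down in a few lines of software: a Metropolis sweep of a 2D Ising model. This Python version is only meant to make the algorithm concrete; the model choice and parameters are illustrative, not the specific spin models run on IANUS.

```python
# Software reference sketch: one Metropolis sweep of a 2D Ising model.
import random, math

def metropolis_sweep(spins, beta):
    """spins: LxL list of +1/-1 values with periodic boundaries."""
    L = len(spins)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        neighbors = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                     spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * neighbors          # energy change of flipping (i, j)
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

L, beta = 16, 0.44                                # near the critical coupling
lattice = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
for sweep in range(100):
    metropolis_sweep(lattice, beta)
print("magnetization per spin:", sum(map(sum, lattice)) / (L * L))
```

The FPGA implementation pipelines and parallelizes exactly this kind of local update over many lattice sites per clock cycle, which is where the reported speed-ups come from.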
Abstract:
Dedicated machines designed for specific computational algorithms can outperform conventional computers by several orders of magnitude. In this note we describe Ianus, a new-generation FPGA-based machine, and its basic features: hardware integration and wide reprogrammability. Our goal is to build a machine that can fully exploit the performance potential of new-generation FPGA devices. We also plan a software platform that simplifies its programming, in order to extend its intended range of application to a wide class of interesting and computationally demanding problems. The decision to develop a dedicated processor is a complex one, involving a careful assessment of its performance lead over traditional computers during its expected lifetime, taking into account their performance increase as predicted by Moore's law. We discuss this point in detail.
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and a comparison of the implementations parallelised on multi-core CPUs, on GPUs, and on the combination of both is conducted. A performance analysis with large images is carried out in order to identify the number of pixels to allocate to the CPU and to the GPU. The observed times show that both devices should be given work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-cores are very advisable, and they open the door to using such algorithms for real-time processing.
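A serial sketch of the peer-group idea with a fuzzy metric is given below to make the filter concrete; the specific metric, threshold and replacement rule are assumptions of this sketch rather than necessarily those of the paper, and the paper's actual contribution, the CUDA/OpenMP parallelisation and the CPU/GPU workload split, is omitted here.

```python
# Serial sketch: peer-group impulse detection with a product-form fuzzy metric.
import numpy as np

K = 1024.0   # smoothing constant of the fuzzy metric (assumed value)

def fuzzy_metric(a, b):
    """Similarity in (0, 1]; equals 1 only when the RGB pixels are identical."""
    a, b = a.astype(float), b.astype(float)
    return np.prod((np.minimum(a, b) + K) / (np.maximum(a, b) + K))

def denoise(img, threshold=0.9, min_peers=3):
    """img: HxWx3 uint8 array. Pixels with too few 'peers' among their 8
    neighbours are treated as impulses and replaced by the channel-wise
    median of the neighbourhood."""
    out = img.copy()
    H, W, _ = img.shape
    for y in range(1, H - 1):
        for x in range(1, W - 1):
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
            center = img[y, x]
            peers = sum(fuzzy_metric(center, p) >= threshold
                        for k, p in enumerate(window) if k != 4)
            if peers < min_peers:                  # likely impulse noise
                out[y, x] = np.median(window, axis=0)
    return out
```

In the parallel versions, the per-pixel loop is what gets distributed: rows (or pixel blocks) are split between OpenMP threads and CUDA thread blocks, which is why the CPU/GPU workload ratio studied in the paper matters.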