716 results for Mathematical computing
Abstract:
Dynamically reconfigurable SRAM-based field-programmable gate arrays (FPGAs) enable the implementation of reconfigurable computing systems where several applications may be run simultaneously, sharing the available resources according to their own immediate functional requirements. To exclude malfunctioning due to faulty elements, the reliability of all FPGA resources must be guaranteed. Since resource allocation takes place asynchronously, an online structural test scheme is the only way of ensuring reliable system operation. On the other hand, this test scheme should not disturb the operation of the circuit, otherwise availability would be compromised. System performance is also influenced by the efficiency of the management strategies that must be able to dynamically allocate enough resources when requested by each application. As those resources are allocated and later released, many small free resource blocks are created, which are left unused due to performance and routing restrictions. To avoid wasting logic resources, the FPGA logic space must be defragmented regularly. This paper presents a non-intrusive active replication procedure that supports the proposed test methodology and the implementation of defragmentation strategies, assuring both the availability of resources and their perfect working condition, without disturbing system operation.
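The defragmentation problem mentioned above can be pictured with a toy model that is not the paper's active replication procedure: treat the FPGA logic space as a one-dimensional sequence of columns, measure how scattered the free columns are, and compact the occupied modules to one side. A minimal sketch, with made-up module names:

```python
# Toy illustration only (not the paper's test or replication method): model the
# FPGA logic space as a list of columns, each either free (None) or holding a
# module id, and measure fragmentation as how far the largest contiguous free
# block falls short of the total free space. "Defragmentation" here is a plain
# compaction that relocates modules towards the left.

def largest_free_block(columns):
    best = run = 0
    for c in columns:
        run = run + 1 if c is None else 0
        best = max(best, run)
    return best

def fragmentation(columns):
    free = columns.count(None)
    return 0.0 if free == 0 else 1.0 - largest_free_block(columns) / free

def compact(columns):
    occupied = [c for c in columns if c is not None]
    return occupied + [None] * (len(columns) - len(occupied))

if __name__ == "__main__":
    space = ["A", None, None, "B", None, "C", None, None]   # hypothetical layout
    print(fragmentation(space))           # scattered free columns -> 0.6
    print(fragmentation(compact(space)))  # after compaction -> 0.0
```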
Abstract:
For many, the act of teaching was and remains an “art”, in which the most effective teachers and great masters are those who have the ability and the art of conveying their messages and knowledge in a simple and appealing way, regardless of the field of study. Class-related information is increasingly digital, making it important for teachers to master technologies for creating, organising and making content available. This sharing was initially possible through Web pages and later through LMS (Learning Management System) platforms. Creating a website used to be a complicated task, both in terms of cost and of mastering Web technology, and it was sometimes necessary to hire professionals for that purpose. CMS (Content Management System) then emerged: Open Source technologies that allow the management of content. In this context, a study was carried out with the objective of assessing teachers' competences in the field of sharing and managing Digital Content. The present study allowed conclusions to be drawn about the potential and applicability of CMS in education. Its main objective focused on the potential for distributing and sharing Digital Educational Resources, organised from a pedagogical point of view, with students. The role of Cloud Computing in the process of collaborative document sharing was also analysed and studied. As support for this investigation, a model course was designed, implemented in the three main CMS available today, and the potential of each one in this context was evaluated. Finally, the conclusions drawn from the present study were presented.
Abstract:
Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation to reduce operational costs and carbon footprints in the environment. Additionally, failures in highly networked computing systems can negatively impact system performance substantially, prohibiting the system from achieving its initial objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users’ jobs. Allied with an energy optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs using the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.
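For intuition on the energy metric reported above, the following minimal sketch computes a work-per-Joule ratio under a simple linear host power model; the power figures, sampling interval and normalisation are assumptions for illustration, not the paper's definitions or decision-making algorithms.

```python
# Illustrative only: work per Joule under a linear host power model
# P(u) = P_idle + (P_peak - P_idle) * u, where u is CPU utilisation sampled at
# fixed intervals. All numbers below are made up.

def host_power(utilisation, p_idle=100.0, p_peak=250.0):
    return p_idle + (p_peak - p_idle) * utilisation

def work_per_joule(completed_work_units, utilisation_trace, interval_s=300):
    """Integrate power over fixed-length utilisation samples to get energy."""
    energy_j = sum(host_power(u) * interval_s for u in utilisation_trace)
    return completed_work_units / energy_j

# Consolidating the same work onto a busier host improves the ratio, because
# the idle-power share per unit of work shrinks.
print(work_per_joule(1000, [0.3] * 12))  # lightly loaded host
print(work_per_joule(1000, [0.8] * 6))   # consolidated, busier host
```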
Abstract:
Extracting the semantic relatedness of terms is an important topic in several areas, including data mining, information retrieval and web recommendation. This paper presents an approach for computing the semantic relatedness of terms using the knowledge base of DBpedia — a community effort to extract structured information from Wikipedia. Several approaches to extract semantic relatedness from Wikipedia using bag-of-words vector models are already available in the literature. The research presented in this paper explores a novel approach using paths on an ontological graph extracted from DBpedia. It is based on an algorithm for finding and weighting a collection of paths connecting concept nodes. This algorithm was implemented in a tool called Shakti that extracts relevant ontological data for a given domain from DBpedia using its SPARQL endpoint. To validate the proposed approach, Shakti was used to recommend web pages on a Portuguese social site related to alternative music, and the results of that experiment are reported in this paper.
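The path-based idea can be sketched as follows. This is not Shakti's actual algorithm or weighting scheme: it simply enumerates simple paths up to a maximum length between two concept nodes of a small, hand-made graph and lets shorter paths contribute more to the relatedness score.

```python
# Minimal sketch of path-based relatedness (not Shakti's algorithm or weights):
# each simple path of k edges between two concepts contributes 1 / 2**k, so
# many short connecting paths mean higher relatedness.

def simple_paths(graph, src, dst, max_len):
    stack = [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            yield path
            continue
        if len(path) > max_len:
            continue
        for nxt in graph.get(node, ()):
            if nxt not in path:
                stack.append((nxt, path + [nxt]))

def relatedness(graph, a, b, max_len=4):
    return sum(1.0 / 2 ** (len(p) - 1) for p in simple_paths(graph, a, b, max_len))

# Toy graph loosely inspired by music concepts (hypothetical data, not DBpedia).
graph = {
    "Punk_rock":  ["Rock_music", "The_Clash"],
    "Rock_music": ["Punk_rock", "Indie_rock", "The_Clash"],
    "Indie_rock": ["Rock_music"],
    "The_Clash":  ["Punk_rock", "Rock_music"],
}
print(relatedness(graph, "Punk_rock", "Indie_rock"))
print(relatedness(graph, "The_Clash", "Indie_rock"))
```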
Abstract:
Pultrusion is an industrial process used to produce glass fiber reinforced polymer profiles. These materials are used worldwide when performance characteristics such as excellent electrical and magnetic insulation, high strength-to-weight ratio, corrosion and weather resistance, long service life and minimal maintenance are required. In this study, we present the results of the modelling and simulation of heat flow through a pultrusion die by means of Finite Element Analysis (FEA). The numerical simulation was calibrated against temperature profiles computed from thermographic measurements carried out during the pultrusion manufacturing process. The results obtained show a maximum deviation of 7%, which is considered acceptable for this type of analysis and is below the 10% value previously specified as the maximum deviation. © 2011, Advanced Engineering Solutions.
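The acceptance criterion used in the calibration can be checked with a one-line computation; the sketch below uses made-up temperature values, not the measured or simulated profiles from the paper.

```python
# Illustrative check of a calibration criterion: maximum relative deviation
# between simulated and measured temperatures against a 10% threshold.
# The temperature values are hypothetical.

def max_relative_deviation(simulated, measured):
    return max(abs(s - m) / abs(m) for s, m in zip(simulated, measured))

simulated = [118.0, 152.0, 176.0, 181.0]   # degrees C along the die (made up)
measured  = [120.0, 149.0, 183.0, 188.0]

deviation = max_relative_deviation(simulated, measured)
print(f"max deviation: {deviation:.1%}, acceptable: {deviation <= 0.10}")
```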
Abstract:
In this paper we study a delay mathematical model for the dynamics of HIV in HIV-specific CD4+ T helper cells. We modify the model presented by Roy and Wodarz in 2012, where the HIV dynamics is studied considering a single CD4+ T cell population. Non-specific helper cells are included as an alternative target cell population, to account for macrophages and dendritic cells. We include two types of delay: (1) a latent period between the time target cells are contacted by the virus particles and the time the virions enter the cells; and (2) a virus production period for new virions to be produced within and released from the infected cells. We compute the reproduction number of the model, R0, and analyse the local stability of the disease-free equilibrium and of the endemic equilibrium. We find that for values of R0 < 1 the model approaches asymptotically the disease-free equilibrium, while for values of R0 > 1 it approaches asymptotically the endemic equilibrium. We observe numerically the phenomenon of backward bifurcation for values of R0 ⪅ 1; this statement will be proved in future work. We also vary the values of the latent period and of the production period of infected cells and free virus, and conclude that increasing these values translates into a decrease of the reproduction number. Thus, a good strategy to control the HIV virus should focus on drugs that prolong the latent period and/or slow down virus production. These results suggest that the model is mathematically and epidemiologically well-posed.
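To make the role of the two delays concrete, a generic within-host model of this kind can be written as below. This is an illustrative form only, under standard target-cell-limited assumptions, not the exact equations of the modified Roy–Wodarz model studied in the paper.

```latex
% Illustrative form only: a generic delayed infection model with an
% intracellular (latent) delay \tau_1 and a virus-production delay \tau_2;
% not the exact system analysed in the paper.
\begin{align*}
\dot T(t) &= \lambda - d\,T(t) - \beta\,T(t)\,V(t),\\
\dot I(t) &= \beta\, e^{-m_1 \tau_1}\, T(t-\tau_1)\, V(t-\tau_1) - \delta\, I(t),\\
\dot V(t) &= N \delta\, e^{-m_2 \tau_2}\, I(t-\tau_2) - c\,V(t),
\end{align*}
% with basic reproduction number
\[
  R_0 \;=\; \frac{\beta\, N\, \lambda\, e^{-m_1\tau_1 - m_2\tau_2}}{c\, d},
\]
% so that lengthening either delay lowers $R_0$, matching the qualitative
% conclusion stated above.
```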
Abstract:
Coevolution between two antagonistic species has been widely studied theoretically for both ecologically- and genetically-driven Red Queen dynamics. A typical outcome of these systems is an oscillatory behavior causing an endless series of adaptation by one species and counter-adaptation by the other. More recently, a mathematical model combining a three-species food chain system with an adaptive dynamics approach revealed genetically driven chaotic Red Queen coevolution. In the present article, we analyze this mathematical model, focusing mainly on the impact of the species' rates of evolution (mutation rates) on the dynamics. Firstly, we analytically prove the boundedness of the trajectories of the chaotic attractor. The complexity of the coupling between the dynamical variables is quantified using observability indices. Using symbolic dynamics theory, we quantify the complexity of genetically driven Red Queen chaos by computing the topological entropy of existing one-dimensional iterated maps using Markov partitions. Codimension-two bifurcation diagrams are also built from the period ordering of the orbits of the maps. Then, we study the predictability of the Red Queen chaos, found in narrow regions of mutation rates. To extend the previous analyses, we also computed the likelihood of finding chaos in a given region of the parameter space while varying other model parameters simultaneously. Such analyses allowed us to compute a mean predictability measure for the system in the explored region of the parameter space. We found that genetically driven Red Queen chaos, although restricted to small regions of the analyzed parameter space, might be highly unpredictable.
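The topological entropy computation mentioned above reduces, for a map with a Markov partition, to a small linear-algebra step: the entropy is the logarithm of the spectral radius of the 0-1 transition matrix between partition intervals. The sketch below shows only that step, on a textbook matrix rather than one derived from the food-chain model.

```python
import numpy as np

# Sketch of the entropy step only: for a one-dimensional map with a Markov
# partition, the topological entropy equals log of the spectral radius of the
# 0-1 transition matrix between partition intervals. The matrix below is a
# textbook example, not one extracted from the paper's model.

def topological_entropy(transition_matrix):
    eigenvalues = np.linalg.eigvals(np.asarray(transition_matrix, dtype=float))
    return float(np.log(max(abs(eigenvalues))))

# Transition matrix whose largest eigenvalue is the golden ratio,
# giving entropy log((1 + sqrt(5)) / 2) ~ 0.4812.
A = [[1, 1],
     [1, 0]]
print(topological_entropy(A))
```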
Abstract:
Single processor architectures are unable to provide the performance required by high-performance embedded systems. Parallel processing based on general-purpose processors can achieve these performances at the cost of a considerable increase in required resources. However, in many cases, simplified optimized parallel cores can be used instead of general-purpose processors, achieving better performance at lower resource utilization. In this paper, we propose a configurable many-core architecture to serve as a co-processor for high-performance embedded computing on Field-Programmable Gate Arrays. The architecture consists of an array of configurable simple cores with support for floating-point operations, interconnected by a configurable interconnection network. For each core it is possible to configure the size of the internal memory, the supported operations and the number of interfacing ports. The architecture was tested on a ZYNQ-7020 FPGA executing several parallel algorithms. The results show that the proposed many-core architecture achieves better performance than a parallel general-purpose processor and that up to 32 floating-point cores can be implemented in a ZYNQ-7020 SoC FPGA.
Abstract:
The rapidly increasing computing power, available storage and communication capabilities of mobile devices make it possible to start processing and storing data locally, rather than offloading it to remote servers, allowing scenarios of mobile clouds without infrastructure dependency. We can now aim at connecting neighboring mobile devices, creating a local mobile cloud that provides storage and computing services on locally generated data. In this paper, we describe an early overview of a distributed mobile system that allows accessing and processing data distributed across mobile devices without an external communication infrastructure. Copyright © 2015 ICST.
Abstract:
Lunacloud is a cloud service provider with offices in Portugal, Spain, France and the UK that focuses on delivering reliable, elastic and low-cost cloud Infrastructure as a Service (IaaS) solutions. The company currently relies on a proprietary IaaS platform - the Parallels Automation for Cloud Infrastructure (PACI) - and wishes to expand and integrate other IaaS solutions seamlessly, namely open source solutions. This is the challenge addressed in this thesis. This proposal, which was fostered by the Eurocloud Portugal Association, contributes to the promotion of interoperability and standardisation in Cloud Computing. The goal is to investigate, propose and develop an interoperable open source solution with standard interfaces for the integrated management of IaaS Cloud Computing resources, based on new as well as existing abstraction libraries or frameworks. The solution should provide both Web and application programming interfaces. The research conducted consisted of two surveys covering existing open source IaaS platforms and PACI (features and API) and open source IaaS abstraction solutions. The first study was focussed on the characteristics of the most popular open source IaaS platforms, namely OpenNebula, OpenStack, CloudStack and Eucalyptus, as well as PACI, and included a thorough inventory of the provided Application Programming Interfaces (API), i.e., the offered operations, followed by a comparison of these platforms in order to establish their similarities and dissimilarities. The second study, on existing open source interoperability solutions, included the analysis and comparison of existing abstraction libraries and frameworks. The approach proposed and adopted, which was supported by the conclusions of the surveys carried out, reuses an existing open source abstraction solution – the Apache Deltacloud framework. Deltacloud relies on the development of software driver modules to interface with different IaaS platforms, officially provides and supports drivers for sixteen IaaS platforms, including OpenNebula and OpenStack, and allows the development of new provider drivers. The latter functionality was used to develop a new Deltacloud driver for PACI. Furthermore, Deltacloud provides a Web dashboard and REpresentational State Transfer (REST) API interfaces. To evaluate the adopted solution, a test bed integrating OpenNebula, OpenStack and PACI nodes was assembled and deployed. The tests conducted involved time elapsed and data payload measurements via the Deltacloud framework as well as via the pre-existing IaaS platform APIs. The Deltacloud framework behaved as expected, i.e., it introduced additional delays but no substantial overheads. Both the Web and the REST interfaces were tested and showed identical measurements. The developed interoperable solution for the seamless integration and provision of IaaS resources from the PACI, OpenNebula and OpenStack IaaS platforms fulfils the specified requirements, i.e., it provides Lunacloud with the ability to expand the range of adopted IaaS platforms and offers a Web dashboard and REST API for their integrated management. The contributions of this work include the surveys and comparisons made, the selection of the abstraction framework and, last but not least, the PACI driver developed.
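As an illustration of how a client consumes the REST interface that Deltacloud exposes in front of the back-end drivers, the sketch below lists instances through a Deltacloud server; it assumes a server running locally on Deltacloud's default port, and the host, port and credentials are placeholders rather than Lunacloud/PACI values.

```python
import requests  # third-party HTTP client

# Minimal sketch, assuming a Deltacloud server running locally on its default
# port (3001) with a driver that accepts the credentials below; host, port and
# credentials are placeholders, not Lunacloud/PACI values.
DELTACLOUD_API = "http://localhost:3001/api"

response = requests.get(
    f"{DELTACLOUD_API}/instances",
    auth=("user", "password"),               # provider credentials (placeholder)
    headers={"Accept": "application/xml"},   # Deltacloud also serves HTML/JSON
)
response.raise_for_status()
print(response.text)  # XML listing of the instances behind the selected driver
```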
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Informatics Engineering (Engenharia Informática).
Abstract:
Bulletin of the Malaysian Mathematical Sciences Society (2), 34(1), 2011, pp. 79–85.
Abstract:
Order picking consists in retrieving products from storage locations to satisfy independent orders from multiple customers. It is generally recognized as one of the most significant activities in a warehouse (Koster et al., 2007). In fact, order picking accounts for up to 50% (Frazelle, 2001) or even 80% (Van den Berg, 1999) of the total warehouse operating costs. The critical issue in today’s business environment is to simultaneously reduce the cost and increase the speed of order picking. In this paper, we address the order picking process in one of the largest Portuguese companies in the grocery business. This problem was proposed at the 92nd European Study Group with Industry (ESGI92). In this setting, each operator steers a trolley on the shop floor in order to select items for multiple customers. The objective is to improve the company's grocery e-commerce operation and bring it up to the level of the best international practices. In particular, the company wants to improve the routing tasks in order to decrease the distances travelled. For this purpose, a mathematical model for faster open shop picking was developed. In this paper, we describe the problem and our proposed solution, as well as some preliminary results and conclusions.
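To see why re-sequencing picks shortens the walked distance, consider the toy routing sketch below. It is not the model developed at ESGI92: it simply places a few hypothetical pick locations on a rectangular shop floor with rectilinear (aisle-like) distances and compares the given visiting order with a nearest-neighbour re-sequencing.

```python
# Toy illustration (not the ESGI92 model): pick locations on a rectangular shop
# floor with rectilinear distances, re-sequenced with a nearest-neighbour
# heuristic to shorten the trolley route starting and ending at the depot.

def distance(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def route_length(start, stops):
    here, total = start, 0
    for stop in stops:
        total += distance(here, stop)
        here = stop
    return total + distance(here, start)     # return to the depot

def nearest_neighbour(start, stops):
    remaining, here, order = list(stops), start, []
    while remaining:
        nxt = min(remaining, key=lambda p: distance(here, p))
        remaining.remove(nxt)
        order.append(nxt)
        here = nxt
    return order

depot = (0, 0)
picks = [(8, 2), (1, 5), (7, 6), (2, 1)]      # hypothetical item locations
print(route_length(depot, picks))                            # order as given
print(route_length(depot, nearest_neighbour(depot, picks)))  # re-sequenced
```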
Abstract:
In this talk, we discuss a scheduling problem that originated at TAP - Maintenance & Engineering - the maintenance, repair and overhaul organization of Portugal’s leading airline. In the repair process of aircraft engines, the operations to be scheduled may be executed on a certain workstation by any processor of a given set, and the objective is to minimize the total weighted tardiness. A mixed integer linear programming formulation, based on the flexible job shop scheduling problem, is presented here, along with a computational experiment on a real instance, provided by TAP-ME, from a regular working week. The model was also tested using benchmark instances available in the literature.
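The total weighted tardiness objective can be linearised in the standard way shown below. This is only a sketch of that one building block, not TAP-ME's full flexible job shop formulation with processor assignment and sequencing constraints.

```latex
% Sketch of the total weighted tardiness objective only; the full flexible job
% shop MILP with processor assignment and sequencing is considerably larger.
\begin{align*}
\min \;& \sum_{j} w_j\, T_j \\
\text{s.t.}\;& T_j \;\ge\; C_j - d_j, \qquad T_j \;\ge\; 0 \quad \forall j,
\end{align*}
% where $C_j$ is the completion time of job $j$ (determined by the scheduling
% constraints), $d_j$ its due date and $w_j$ its weight.
```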
Abstract:
This paper presents a methodology for multi-objective day-ahead energy resource scheduling for smart grids considering intensive use of distributed generation and Vehicle-to-Grid (V2G). The main focus is the application of weighted Pareto to a multi-objective parallel particle swarm approach aiming to solve the dual-objective V2G scheduling problem: minimizing total operation costs and maximizing V2G income. A realistic mathematical formulation, considering the network constraints and the V2G charging and discharging efficiencies, is presented, and parallel computing is applied to the Pareto weights. AC power flow calculation is included in the metaheuristic approach to take the network constraints into account. A case study with a 33-bus distribution network and 1800 V2G resources is used to illustrate the performance of the proposed method.
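The weighted Pareto idea is essentially a weighted-sum scalarisation: each weight turns the two objectives (minimise operation cost, maximise V2G income) into one single-objective problem, and the weights can be processed in parallel. The sketch below shows only that scalarisation on a handful of made-up candidate schedules; it does not reproduce the paper's particle swarm or AC power flow.

```python
# Sketch of the weighted-sum (weighted Pareto) scalarisation only. The paper
# applies it inside a parallel particle swarm with AC power flow; this toy
# version just ranks a few hypothetical candidate schedules.

def scalarise(cost, income, w):
    # minimise w * cost - (1 - w) * income; each weight yields one Pareto point
    return w * cost - (1.0 - w) * income

candidates = [            # (total operation cost, V2G income), made-up values
    (1000.0, 120.0),
    (1050.0, 200.0),
    (1200.0, 340.0),
]

weights = [i / 10 for i in range(11)]   # Pareto weights 0.0, 0.1, ..., 1.0
front = set()
for w in weights:                        # each weight could be solved in parallel
    best = min(candidates, key=lambda c: scalarise(c[0], c[1], w))
    front.add(best)
print(sorted(front))                     # non-dominated trade-offs found
```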