82 results for Application Provider
Abstract:
Glass fibre-reinforced plastics (GFRP), nowadays commonly used in the construction, transportation and automobile sectors, have been considered inherently difficult to recycle due both to the cross-linked nature of thermoset resins, which cannot be remoulded, and to the complex composition of the composite itself, which includes glass fibres, matrix and different types of inorganic fillers. Presently, most GFRP waste is landfilled, leading to negative environmental impacts and additional costs. With increasing awareness of environmental matters and the consequent desire to save resources, recycling would convert an expensive waste disposal problem into a profitable, reusable material. There are several methods to recycle glass fibre-reinforced thermoset materials: (a) incineration, with partial energy recovery from the heat generated during combustion of the organic fraction; (b) thermal and/or chemical recycling, such as solvolysis, pyrolysis and similar thermal decomposition processes, with glass fibre recovery; and (c) mechanical recycling or size reduction, in which the material is subjected to a milling process in order to obtain a specific grain size that makes it suitable as reinforcement in new formulations. This last method has important advantages over the previous ones: there is no atmospheric pollution through gas emissions, much simpler equipment is required compared with the ovens needed for thermal recycling processes, and it does not require chemical solvents with their associated environmental impacts. In this study, the effect of incorporating recycled GFRP waste materials, obtained by milling, on the mechanical behaviour of polyester polymer mortars was assessed. For this purpose, different contents of recycled GFRP waste materials, with distinct size gradings, were incorporated into polyester polymer mortars as sand aggregate and filler replacements. The effect of treating the GFRP waste with a silane coupling agent was also assessed. Design of experiments and data treatment were accomplished by means of factorial design and analysis of variance (ANOVA). The use of a factorial experiment design, instead of the one-factor-at-a-time method, is efficient in allowing the evaluation of the effects and possible interactions of the different material factors involved. Experimental results were promising regarding the recyclability of GFRP waste materials as aggregate and filler replacements for polymer mortar, with significant gains in mechanical properties with respect to unmodified polymer mortars.
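As an illustration of the statistical approach described above, the following sketch sets up a replicated two-level factorial design and runs an ANOVA with statsmodels; the factor names, levels and response values are hypothetical placeholders, not the study's data.

```python
# A minimal, illustrative sketch of a replicated 2^3 factorial design analysed
# with ANOVA. Factor names, levels and response values are hypothetical and are
# NOT the data reported in the study.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

runs = pd.DataFrame({
    "waste_content": [0, 8, 0, 8, 0, 8, 0, 8],            # % of GFRP recyclate (illustrative)
    "size_grading":  ["fine", "fine", "coarse", "coarse"] * 2,
    "silane":        ["no"] * 4 + ["yes"] * 4,
})
# Two replicates of each run, with illustrative flexural strength values (MPa).
data = pd.concat([runs, runs], ignore_index=True)
data["flex_strength"] = [
    18.2, 22.5, 17.9, 21.8, 18.6, 24.1, 18.3, 23.4,
    18.0, 22.9, 18.1, 22.2, 18.8, 24.6, 18.5, 23.0,
]

# Full factorial model: main effects plus all interactions.
model = ols("flex_strength ~ C(waste_content) * C(size_grading) * C(silane)",
            data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```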
Abstract:
The World Business Council for Sustainable Development (WBCSD) defines Eco-Efficiency as follows: 'Eco-Efficiency is achieved by the delivery of competitively priced goods and services that satisfy human needs and bring quality of life, while progressively reducing ecological impacts and resource intensity throughout the life-cycle to a level at least in line with the earth's estimated carrying capacity'. From this point of view, Eco-Efficiency is a key concept for sustainable development, bringing together economic and ecological progress. Measuring the Eco-Efficiency of a company, factory or business is a complex process that involves the measurement and control of several relevant parameters or indicators, either applied globally to all companies or specific to the nature and particularities of the business itself. In this study, an attempt was made to measure and evaluate the eco-efficiency of a pultruded composite processing company. For this purpose the recommendations of the WBCSD [1] and the directives of the ISO 14031 standard [2] were followed and applied. The analysis was restricted to the main business branch of the company: the production and sale of standard GFRP pultrusion profiles. The main general indicators of eco-efficiency, as well as the specific indicators, were defined and determined according to ISO 14031 recommendations. Based on the indicators' figures, the value profile, the environmental profile, and the pertinent eco-efficiency ratios were established and analysed. In order to evaluate potential improvements in the company's eco-performance, new indicator values and eco-efficiency ratios were estimated taking into account the implementation of new procedures, both upstream and downstream of the production process, namely: a) adoption of a new heating system for the pultrusion die in the manufacturing process, more effective and with lower heat losses; b) implementation of new software for stock management (raw materials and final products) that minimises production failures and delivery delays to the final consumer; c) a recycling approach, with partial reuse of scrap material derived from the manufacturing, cutting and assembly processes of GFRP profiles. In particular, the last approach seems to significantly improve the eco-efficiency performance of the company. Currently, by-products and wastes generated in the manufacturing process of GFRP profiles are landfilled, with additional costs to the company reflected in the transport of scrap, landfill taxes and the test analyses required for waste materials. However, mechanical recycling of GFRP waste materials, with reduction to powdered and fibrous particulates, constitutes a recycling process that can be easily attained with heavy-duty cutting mills. The subsequent reuse of the obtained recyclates, either in a closed-loop process, as filler replacement in the resin matrix of GFRP profiles, or as reinforcement of other composite materials produced by the company, will lead both to cost reductions in raw materials and landfilling and to the minimisation of landfilled waste. These features lead to significant improvements in the assessed eco-efficiency ratios of the present case study, yielding a more sustainable product and manufacturing process for pultruded GFRP profiles.
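As a minimal sketch of the ratio construction mentioned above, the snippet below computes WBCSD-style eco-efficiency ratios (value created per unit of environmental burden); the indicator names and figures are hypothetical, not the company's data.

```python
# A minimal sketch of how WBCSD-style eco-efficiency ratios can be computed:
# ratio = (product or service value) / (environmental influence).
# Indicator names and figures below are hypothetical, not the company's data.

def eco_efficiency(value: float, environmental_influence: float) -> float:
    """Generic eco-efficiency ratio: value created per unit of environmental burden."""
    return value / environmental_influence

# Hypothetical annual figures for a pultrusion line.
net_sales_eur = 1_250_000.0          # value indicator
energy_mwh = 480.0                   # environmental indicators
waste_landfilled_t = 35.0

print("EUR of sales per MWh consumed:",
      round(eco_efficiency(net_sales_eur, energy_mwh), 1))
print("EUR of sales per tonne landfilled:",
      round(eco_efficiency(net_sales_eur, waste_landfilled_t), 1))
```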
Abstract:
The goal of the work presented in this paper is to provide mobile platforms within our campus with a GPS-based data service capable of supporting precise outdoor navigation. This can be achieved by providing campus-wide access to real-time Differential GPS (DGPS) data. As a result, we designed and implemented a three-tier distributed system that provides Internet data links between remote DGPS sources and the campus, together with a campus-wide DGPS data dissemination service. The Internet data link service is a two-tier client/server in which the server side is connected to the DGPS station and the client side is located at the campus. The campus-wide DGPS data provider disseminates the DGPS data received at the campus via the campus Intranet and via a wireless data link. The wireless broadcast is intended for portable receivers equipped with a DGPS wireless interface, while the Intranet link is provided for receivers with a DGPS serial interface. The application is expected to provide adequate support for accurate outdoor campus navigation tasks.
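As a rough sketch of the dissemination idea (not the paper's implementation), the snippet below reads raw DGPS/RTCM correction bytes from a serially attached receiver and re-broadcasts them over UDP on the campus network; port names and addresses are hypothetical.

```python
# A minimal sketch of one-to-many DGPS correction dissemination: read raw
# correction bytes from a serial source and re-broadcast them over UDP.
# Port names and addresses are hypothetical placeholders.
import socket
import serial  # pyserial; assumes the DGPS source is attached to a serial port

DGPS_SERIAL_PORT = "/dev/ttyUSB0"              # hypothetical
CAMPUS_BROADCAST = ("192.168.255.255", 2101)   # hypothetical address/port

def relay_corrections() -> None:
    src = serial.Serial(DGPS_SERIAL_PORT, baudrate=9600, timeout=1)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    while True:
        chunk = src.read(512)                  # raw RTCM correction bytes
        if chunk:
            sock.sendto(chunk, CAMPUS_BROADCAST)  # campus-wide dissemination

if __name__ == "__main__":
    relay_corrections()
```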
Abstract:
Generally, the evolution of applications has an impact on their underlying data models, thus becoming a time-consuming problem for programmers and database administrators. In this paper we address this problem with an aspect-oriented approach based on a meta-model for orthogonal persistent programming systems. By applying reflection techniques, our meta-model aims to be simpler than its competitors. Furthermore, it enables multi-version database schemas. We also discuss two case studies in order to demonstrate the advantages of our approach.
Abstract:
The need for better adaptation of networks to the flows they transport has led to research on new approaches such as content-aware networks and network-aware applications. In parallel, recent developments in multimedia and content-oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV have reinforced interest in multicast technologies. IP multicast has not been widely deployed due to inter-domain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management-driven hybrid multicast solution that is multi-domain and media-oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content-aware network and network-aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content-aware networks, spanning one or multiple IP domains, customized to the type of content to be transported while fulfilling the quality-of-service requirements of the service provider.
Abstract:
Master's Degree in Computer Engineering and Medical Instrumentation
Abstract:
Lunacloud is a cloud service provider with offices in Portugal, Spain, France and the UK that focuses on delivering reliable, elastic and low-cost cloud Infrastructure as a Service (IaaS) solutions. The company currently relies on a proprietary IaaS platform - the Parallels Automation for Cloud Infrastructure (PACI) - and wishes to expand and integrate other IaaS solutions seamlessly, namely open source solutions. This is the challenge addressed in this thesis. This proposal, which was fostered by the Eurocloud Portugal Association, contributes to the promotion of interoperability and standardisation in Cloud Computing. The goal is to investigate, propose and develop an interoperable open source solution with standard interfaces for the integrated management of IaaS Cloud Computing resources, based on new as well as existing abstraction libraries or frameworks. The solution should provide both Web and application programming interfaces. The research conducted consisted of two surveys covering existing open source IaaS platforms and PACI (features and API) and open source IaaS abstraction solutions. The first study focused on the characteristics of the most popular open source IaaS platforms, namely OpenNebula, OpenStack, CloudStack and Eucalyptus, as well as PACI, and included a thorough inventory of the provided Application Programming Interfaces (API), i.e., the offered operations, followed by a comparison of these platforms in order to establish their similarities and dissimilarities. The second study, on existing open source interoperability solutions, included the analysis and comparison of existing abstraction libraries and frameworks. The approach proposed and adopted, supported by the conclusions of the surveys carried out, reuses an existing open source abstraction solution - the Apache Deltacloud framework. Deltacloud relies on the development of software driver modules to interface with different IaaS platforms, officially provides and supports drivers for sixteen IaaS platforms, including OpenNebula and OpenStack, and allows the development of new provider drivers. The latter functionality was used to develop a new Deltacloud driver for PACI. Furthermore, Deltacloud provides a Web dashboard and REpresentational State Transfer (REST) API interfaces. To evaluate the adopted solution, a test bed integrating OpenNebula, OpenStack and PACI nodes was assembled and deployed. The tests conducted involved elapsed-time and data-payload measurements via the Deltacloud framework as well as via the pre-existing IaaS platform APIs. The Deltacloud framework behaved as expected, i.e., it introduced additional delays but no substantial overheads. Both the Web and the REST interfaces were tested and showed identical measurements. The developed interoperable solution for the seamless integration and provision of IaaS resources from the PACI, OpenNebula and OpenStack IaaS platforms fulfils the specified requirements, i.e., it provides Lunacloud with the ability to expand the range of adopted IaaS platforms and offers a Web dashboard and REST API for their integrated management. The contributions of this work include the surveys and comparisons made, the selection of the abstraction framework and, last but not least, the PACI driver developed.
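As an illustration of the kind of REST call the adopted framework exposes, the sketch below lists instances through a Deltacloud server from Python; the host, credentials, driver name and header usage are assumptions to be checked against the Deltacloud documentation for the deployed version.

```python
# A sketch of how a client might list instances through a Deltacloud server's
# REST API. Host, port, credentials and driver name are placeholders; header
# names, endpoints and response structure should be checked against the
# Deltacloud documentation for the deployed version.
import requests

DELTACLOUD_URL = "http://deltacloud.example.org:3001/api"  # hypothetical endpoint

resp = requests.get(
    f"{DELTACLOUD_URL}/instances",
    auth=("cloud_user", "cloud_password"),       # back-end credentials (placeholder)
    headers={
        "Accept": "application/json",
        "X-Deltacloud-Driver": "openstack",      # assumed per-request driver selection
    },
    timeout=30,
)
resp.raise_for_status()
for inst in resp.json().get("instances", []):    # assumed JSON layout
    print(inst.get("id"), inst.get("state"))
```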
Abstract:
Non-technical loss is not a problem with a trivial solution or regional character, and its minimization represents the guarantee of investment in product quality and in the maintenance of power systems, introduced by the competitive environment that followed the period of privatization on the national scene. In this paper, we show how to improve the training phase of a neural network-based classifier using a recently proposed meta-heuristic technique called Charged System Search, which is based on the interactions between electrically charged particles. The experiments were carried out in the context of non-technical loss in power distribution systems, on a dataset obtained from a Brazilian electrical power company, and demonstrated the robustness of the proposed technique against several other nature-inspired optimization techniques for training neural networks. Thus, it is possible to improve some applications in Smart Grids.
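As a simplified illustration of the meta-heuristic named above (not the paper's implementation), the sketch below uses a Charged System Search-style update to optimise the weight vector of a tiny neural network on toy data; the network size, dataset and CSS constants are assumptions.

```python
# A simplified, illustrative Charged System Search (CSS) used to optimise the
# weights of a tiny neural network. Toy data, network size and constants are
# assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (stand-in for the non-technical-loss dataset).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

N_HIDDEN = 5
DIM = 4 * N_HIDDEN + N_HIDDEN  # weights of a 4-5-1 network without biases

def loss(w):
    """Mean squared error of the tiny network encoded by weight vector w."""
    w1 = w[: 4 * N_HIDDEN].reshape(4, N_HIDDEN)
    w2 = w[4 * N_HIDDEN:]
    hidden = np.tanh(X @ w1)
    out = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
    return float(np.mean((out - y) ** 2))

def css(n_particles=20, iterations=100, a=1.0):
    """Simplified CSS: fitness-based charges and a Coulomb-like attraction."""
    pos = rng.uniform(-1, 1, size=(n_particles, DIM))
    vel = np.zeros_like(pos)
    for _ in range(iterations):
        fit = np.array([loss(p) for p in pos])
        best, worst = fit.min(), fit.max()
        # Charge: better (lower-loss) particles get larger charges.
        q = (worst - fit) / (worst - best + 1e-12)
        new_pos = pos.copy()
        for i in range(n_particles):
            force = np.zeros(DIM)
            for j in range(n_particles):
                if i == j:
                    continue
                d = pos[j] - pos[i]
                r = np.linalg.norm(d) + 1e-12
                # Inside radius a: linear law; outside: inverse-square law.
                mag = q[j] * (r / a ** 3 if r < a else 1.0 / r ** 2)
                force += mag * d / r
            vel[i] = 0.5 * rng.random() * vel[i] + 0.5 * rng.random() * force
            new_pos[i] = pos[i] + vel[i]
        pos = new_pos
    fit = np.array([loss(p) for p in pos])
    return pos[fit.argmin()], fit.min()

w_best, best_loss = css()
print("best training loss:", best_loss)
```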
Abstract:
This paper presents a modified Particle Swarm Optimization (PSO) methodology to solve the problem of energy resources management with high penetration of distributed generation and Electric Vehicles (EVs) with gridable capability (V2G). The objective of the day-ahead scheduling problem in this work is to minimize operation costs, namely energy costs, regarding the management of these resources in the smart grid context. The modifications applied to the PSO aim to improve its adequacy for solving this problem. The proposed Application Specific Modified Particle Swarm Optimization (ASMPSO) includes an intelligent mechanism to adjust velocity limits during the search process, as well as self-parameterization of the PSO parameters, making it more user-independent. It presents better robustness and convergence characteristics than the tested PSO variants, as well as better constraint handling. This enables its use for addressing real-world large-scale problems in much shorter times than deterministic methods, providing system operators with adequate decision support and achieving efficient resource scheduling, even when a significant number of alternative scenarios must be considered. The paper includes two realistic case studies with different penetrations of gridable vehicles (1000 and 2000). The proposed methodology is about 2600 times faster than the Mixed-Integer Non-Linear Programming (MINLP) reference technique, reducing the time required from 25 h to 36 s for the scenario with 2000 vehicles, with about one percent of difference in the objective function cost value.
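As a simplified illustration of the velocity-limit idea (not the authors' exact ASMPSO), the sketch below runs a PSO whose velocity limits are tightened as the search progresses; the toy cost function and all constants are assumptions for demonstration only.

```python
# A simplified PSO with velocity limits that shrink during the search.
# The toy cost function and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
DIM, SWARM, ITERS = 10, 30, 200
LB, UB = -10.0, 10.0                      # bounds of the (toy) decision variables

def cost(x):
    """Placeholder operation-cost surrogate: a simple sphere function."""
    return float(np.sum(x ** 2))

pos = rng.uniform(LB, UB, size=(SWARM, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for it in range(ITERS):
    # Velocity limit shrinks linearly from 20% to 2% of the variable range.
    vmax = (UB - LB) * (0.20 - 0.18 * it / ITERS)
    w = 0.9 - 0.5 * it / ITERS            # decreasing inertia weight
    r1, r2 = rng.random((SWARM, DIM)), rng.random((SWARM, DIM))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    vel = np.clip(vel, -vmax, vmax)       # adaptive velocity limiting
    pos = np.clip(pos + vel, LB, UB)
    c = np.array([cost(p) for p in pos])
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cost found:", pbest_cost.min())
```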
Abstract:
Demand response programs and models have been developed and implemented to improve the performance of electricity markets, taking full advantage of smart grids. Studying and addressing consumers' flexibility and network operation scenarios makes it possible to design improved demand response models and programs. The methodology proposed in the present paper addresses the definition of demand response programs that consider demand shifting between periods, regarding the occurrence of multi-period demand response events. The optimization model focuses on minimizing the network and resource operation costs for a Virtual Power Player. Quantum Particle Swarm Optimization has been used to obtain the solutions of the optimization model, which is applied to a large set of operation scenarios. The implemented case study illustrates the use of the proposed methodology to support the decisions of the Virtual Power Player concerning the duration of each demand response event.
Abstract:
The Basel Committee on Banking Supervision (BCBS) introduced new regulations for banking supervision in December 2010, better known as the Basel III recommendations, aimed at guaranteeing the solidity of banks worldwide and mitigating the risk of new banking crises. The European Union transposed these directives through the Capital Requirements Directive IV (CRD IV). Portugal adopted CRD IV through Decree-Law no. 157/2014 of 24 October 2014, in force from 24 November 2014. While individual banks have been given the option of using the internal ratings-based method, this study analyses the compliance levels of all Portuguese banking institutions using the standardised method, also prescribed by the BCBS. Our results show that, of the thirteen banks analysed on 31-12-2013, only five were in a comfortable position, and the remaining eight could not meet the minimum requirements set by the BCBS for 1-1-2014.
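As a purely illustrative worked example of the standardised-approach check, the snippet below computes capital ratios from hypothetical balance-sheet figures against the fully phased-in Basel III minima (not the transitional 2014 thresholds assessed in the study).

```python
# An illustrative capital-ratio check under the standardised approach.
# Balance-sheet figures are hypothetical; the minima shown are the fully
# phased-in Basel III levels (CET1 4.5%, Tier 1 6%, total capital 8%),
# not the transitional 2014 thresholds assessed in the study.
cet1_capital = 950.0      # EUR million (hypothetical)
tier1_capital = 1_050.0
total_capital = 1_300.0

# Risk-weighted assets: exposure x standardised risk weight (hypothetical mix).
exposures = [
    (4_000.0, 0.35),   # residential mortgages
    (6_500.0, 1.00),   # unrated corporate exposures
    (2_000.0, 0.00),   # sovereign exposures (0% weight)
    (1_500.0, 0.75),   # retail exposures
]
rwa = sum(amount * weight for amount, weight in exposures)

for name, capital, minimum in [
    ("CET1", cet1_capital, 0.045),
    ("Tier 1", tier1_capital, 0.060),
    ("Total capital", total_capital, 0.080),
]:
    ratio = capital / rwa
    print(f"{name}: {ratio:.2%} (minimum {minimum:.1%}) ->",
          "compliant" if ratio >= minimum else "non-compliant")
```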
Abstract:
Based on a literature review, this article frames the different stages of the foster care process, identifying a set of standardized measures in the American and Portuguese contexts which, if implemented, could contribute towards higher levels of foster care success. The article continues with the presentation of a comparative study based on the application of the Casey Foster Applicant Inventory-Applicant Version (CFAI-A) questionnaire in the aforementioned contexts. Taking a comparative analysis of the CFAI-A's psychometric characteristics in four different samples as a starting point, we found that, despite having been adapted to the Portuguese reality, the questionnaire kept the quality values presented in the American samples, showing, in particular, significant values for reliability and validity. This questionnaire, which aims to assess the potential of foster families, also supports the technical staff's decision-making process regarding the monitoring and support of foster families, while promoting better placement decisions towards the child's integration and development.
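As a minimal sketch of the kind of reliability evidence reported for the CFAI-A, the snippet below computes Cronbach's alpha over item scores; the responses are random placeholders, not data from any of the four samples.

```python
# A minimal sketch of an internal-consistency (reliability) check:
# Cronbach's alpha over item scores. The item responses below are random
# placeholders, not data from any of the four samples.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
# 120 hypothetical respondents answering 10 Likert-type items (1-5).
responses = rng.integers(1, 6, size=(120, 10)).astype(float)
print("alpha:", round(cronbach_alpha(responses), 3))
```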
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability and analysis pessimism are serious challenges to their integration into this domain. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM, and propose a three-stage method to solve it. An extended version of an existing analysis is used to assure that the derived mappings (i) guarantee the fulfilment of the timing constraints posed on the worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing for, e.g., energy/thermal management, fault tolerance and/or performance reasons.