14 results for cost-per-wear model
at Instituto Politécnico do Porto, Portugal
Abstract:
Project Work presented to the Instituto de Contabilidade e Administração do Porto to obtain the Master's degree in Accounting and Finance, under the supervision of Paulino Manuel Leite da Silva
Abstract:
Energy efficiency and concern for sustainability have been gaining prominence in modern society. This work contributes to this trend: it set out to evaluate and suggest changes to the climate-control system of the Biorama building at the Parque Biológico de Vila Nova de Gaia (PBG). First, a physical, chemical, and geographical characterization of the five biomes that make up the Biorama was carried out, drawing on documents provided by the PBG itself, site visits, and recorded measurements of several parameters (temperature, relative humidity, air quality). The thermal balance of the buildings was then computed in accordance with the legislation in force, using theoretical expressions and concepts. Heating thermal gains of 15811, 10694, 7939, 9233, and 6621 kWh/year were determined for the Tropical Forest, Mesozoic, Dunes, Savanna, and Desert biomes, respectively. Summer thermal gains of 7093, 4798, 3560, 4144, and 2971 kWh were likewise determined for the Tropical Forest, Mesozoic, Dunes, Savanna, and Desert, respectively. The heating thermal loads were 149, 125, 47, 60, and 51 kW, and the cooling thermal loads were 59, 57, 47, 35, and 36 kW, in the Tropical Forest, Mesozoic, Dunes, Savanna, and Desert, respectively. Several solutions are put forward, along with behavioural alternatives, to correct some of the problems identified. One proposal is the installation of solar panels and heat accumulators, estimated to yield a combined average gain of 500 W in each biome, representing an investment of 1050 euros with a payback of one year. Regarding humidity, more effective use of the existing sprinklers and the use of sponges are suggested in order to raise relative humidity above 80%. Conversely, in winter, the use of hygroscopic material is proposed to lower relative humidity by about 5%; the supports and the hygroscopic material cost around 250 €. Finally, the installation of a 16 000 BTU air-conditioning unit in the connecting corridor is suggested, as it is the only way to guarantee thermal comfort conditions there. This cooling proposal, together with a plastic-strip curtain that separates cold and warm air more efficiently, has an approximate cost of 350 €. The use of tarpaulins or of a climbing plant, at a cost of 5 € per plant, is also suggested for the south-facing roofs, and the corridor area should be fully covered in order to avoid direct solar exposure.
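As a quick check on the payback figure quoted above, a minimal sketch of the arithmetic, assuming an electricity price of about 0.24 €/kWh (the price is not stated in the abstract):

```python
# Payback sketch for the solar-panel proposal above. The 500 W average
# gain and the 1050 EUR investment come from the abstract; the energy
# price is an assumption made here for illustration only.
avg_gain_w = 500            # average combined gain per biome (W)
investment_eur = 1050.0     # quoted investment
price_eur_per_kwh = 0.24    # assumed electricity price

annual_energy_kwh = avg_gain_w / 1000 * 8760        # ~4380 kWh/year
annual_savings_eur = annual_energy_kwh * price_eur_per_kwh
print(f"savings: {annual_savings_eur:.0f} EUR/yr, "
      f"payback: {investment_eur / annual_savings_eur:.1f} yr")
```

At that assumed price the savings come to roughly 1050 EUR per year, consistent with the one-year payback claimed.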
Abstract:
In the present paper we consider a differentiated Stackelberg model in which the leader firm engages in an R&D process that yields an endogenous cost-reducing innovation. The aim is to study the licensing of the cost reduction through a per-unit royalty and a fixed fee. We analyse the implications of these types of licensing contracts for the R&D effort, the firms' profits, the consumer surplus, and the social welfare. Using comparative static analysis, we conclude that the degree of differentiation of the goods plays an important role in the results.
Abstract:
In this paper we consider a differentiated Stackelberg model in which the leader firm engages in an R&D process that yields an endogenous cost-reducing innovation. The aim is to study the licensing of the cost reduction through a two-part tariff. Using comparative static analysis, we conclude that the degree of differentiation of the goods plays an important role in the results. We also directly compare our model with the Cournot duopoly model.
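The two abstracts above share the same setup. A minimal sketch of a standard differentiated-duopoly formulation with cost-reducing R&D and a two-part tariff, offered only as an illustration, not as the authors' exact specification:

```latex
% Linear inverse demands with differentiation parameter \gamma:
\[
p_i = a - q_i - \gamma q_j, \qquad i \neq j, \quad \gamma \in (0,1).
\]
% The leader invests x in R&D at cost x^2/2, cutting its marginal cost
% from c to c - x. Licensing under a two-part tariff (fixed fee F plus
% per-unit royalty r) gives the follower effective cost c - x + r, so
\[
\pi_L = (p_L - c + x)\,q_L + F + r\,q_F - \tfrac{x^2}{2}, \qquad
\pi_F = (p_F - c + x - r)\,q_F - F .
\]
```

The comparative statics in the differentiation parameter mentioned in both abstracts would then be read off the equilibrium of this two-stage game.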
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, the map is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall, and Motley-Keenan. The kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) location estimation algorithms were used to determine the accuracy of the proposed technique. This work relies on analytical and measurement tools to determine which path loss propagation models are better suited to location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values for the model parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, accounting for wall thickness, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allow propagation loss to be estimated for several environments. Results from two test scenarios showed the reliability of the adjustments, yielding smaller errors between measured and predicted values.
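A minimal sketch of the fingerprinting pipeline described above: a multi-wall style path loss model generates the radio map, and WkNN estimates the position. The AP positions, wall counts, and path loss parameters are illustrative assumptions, not the calibrated values from the paper.

```python
import numpy as np

aps = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]       # assumed AP positions (m)
grid = [(float(x), float(y)) for x in range(11) for y in range(9)]

def multi_wall_rssi(d, n_walls, p0=-40.0, n=2.5, wall_db=3.5):
    # COST multi-wall flavour: log-distance loss plus a fixed loss per wall.
    # p0, n and wall_db are illustrative assumptions, not the paper's values.
    return p0 - 10.0 * n * np.log10(max(d, 0.1)) - n_walls * wall_db

def predicted_vector(p, walls_per_ap=(1, 1, 2)):  # stubbed wall counts
    return [multi_wall_rssi(np.hypot(p[0] - ax, p[1] - ay), w)
            for (ax, ay), w in zip(aps, walls_per_ap)]

radio_map = np.array([predicted_vector(p) for p in grid])

def wknn_locate(measured, k=3):
    # WkNN: average the k closest reference points, weighted by 1/distance
    # in signal space (uniform weights would give plain kNN).
    dists = np.linalg.norm(radio_map - np.asarray(measured), axis=1)
    idx = np.argsort(dists)[:k]
    w = 1.0 / (dists[idx] + 1e-9)
    pts = np.array([grid[i] for i in idx])
    return (w[:, None] * pts).sum(axis=0) / w.sum()

print(wknn_locate([-55.0, -60.0, -62.0]))   # estimated (x, y) in metres
```

In the paper's approach the radio map would come from the adjusted Motley-Keenan/COST models with fitted wall attenuations; the estimation step is unchanged.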
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. Network cost allocation, traditionally used in transmission networks, should be adapted to distribution networks, taking into account the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles capable of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists of an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase. A 33-bus distribution network with a large penetration of DER is used to illustrate the application of the proposed model.
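A minimal sketch of the MW-mile step in the third phase, assuming the per-line MW contributions of each player have already been produced by a tracing algorithm (all numbers are illustrative):

```python
# MW-mile allocation sketch: each player pays in proportion to the
# MW x length use it makes of the network. The per-line MW contributions
# would come from Kirschen's or Bialek's tracing; here they are assumed.
line_km = {"L1": 4.0, "L2": 2.5}               # line lengths (km), assumed
network_cost_eur = 1300.0                      # total cost to allocate
use_mw = {"DG1": {"L1": 1.2, "L2": 0.3},       # traced MW per player per line
          "V2G": {"L1": 0.4, "L2": 0.9}}

mw_km = {p: sum(mw * line_km[l] for l, mw in u.items())
         for p, u in use_mw.items()}
total = sum(mw_km.values())
charges = {p: network_cost_eur * v / total for p, v in mw_km.items()}
print(charges)   # cost share per player, proportional to MW-km usage
```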
Abstract:
Electricity markets are complex environments involving a large number of different entities, playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator designed to model market players and simulate their operation in the market. Market players are entities with specific characteristics and objectives that make their own decisions and interact with other players. MASCEM provides several dynamic strategies for agents' behaviour. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. The method uses a reinforcement learning algorithm to learn from experience how to choose the best bid from a set of possible bids. These bids are defined according to the cost function that each producer presents.
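A minimal sketch of the reinforcement-learning idea: an epsilon-greedy agent learns the value of each bid in a discrete set from repeated market outcomes. The market stand-in and all numbers are assumptions; MASCEM's actual strategies and cost functions are richer than this.

```python
import random

bids = [30.0, 35.0, 40.0, 45.0]   # candidate bid prices (EUR/MWh), assumed
q = [0.0] * len(bids)             # estimated value (profit) of each bid
counts = [0] * len(bids)

def market_profit(bid):
    # Stand-in for the market simulation: the bid is accepted below an
    # unknown clearing price; profit = (bid - cost) * quantity. Illustrative.
    clearing, cost, qty = 42.0, 28.0, 10.0
    return (bid - cost) * qty if bid <= clearing else 0.0

for episode in range(1000):
    if random.random() < 0.1:                     # epsilon-greedy exploration
        i = random.randrange(len(bids))
    else:
        i = max(range(len(bids)), key=lambda j: q[j])
    r = market_profit(bids[i])
    counts[i] += 1
    q[i] += (r - q[i]) / counts[i]                # incremental mean update

print("learned best bid:", bids[max(range(len(bids)), key=lambda j: q[j])])
```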
Abstract:
The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to compete in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, five broad selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. After identifying the criteria, a survey was prepared and companies were contacted in order to understand which factors carry more weight in their decisions when choosing partners. Once the results were interpreted and the data processed, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or Value Analysis. The goal of the paper is to provide a selection reference model that can serve as an orientation/pattern for decision making in the supplier/partner selection process.
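A minimal sketch of the linear weighting model, with hypothetical weights for the five criteria named above and made-up supplier scores (the survey-derived weights are not given in the abstract):

```python
# Linear weighting sketch: score each supplier as a weighted sum of
# normalized criterion scores. Weights and scores are illustrative only.
weights = {"Quality": 0.30, "Financial": 0.15, "Synergies": 0.15,
           "Cost": 0.25, "Production System": 0.15}
suppliers = {
    "A": {"Quality": 0.9, "Financial": 0.6, "Synergies": 0.7,
          "Cost": 0.5, "Production System": 0.8},
    "B": {"Quality": 0.7, "Financial": 0.8, "Synergies": 0.6,
          "Cost": 0.9, "Production System": 0.6},
}
scores = {s: sum(weights[c] * v[c] for c in weights)
          for s, v in suppliers.items()}
print(max(scores, key=scores.get), scores)   # best supplier and all scores
```

In the full model each broad criterion would itself aggregate its five sub-criteria, giving the hierarchical structure to which AHP can be applied.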
Abstract:
We consider a Bertrand duopoly model with unknown costs. Each firm's aim is to choose the price of its product according to the well-known concept of Bayesian Nash equilibrium. The choices are made simultaneously by both firms. In this paper, we suppose that each firm has two different technologies and uses one of them according to a certain probability distribution. The use of one technology or the other affects the unitary production cost. We show that this game has exactly one Bayesian Nash equilibrium. We analyse the advantages, for firms and for consumers, of using the technology with the highest production cost versus the one with the lowest production cost. We prove that the expected profit of each firm increases with the variance of its production costs. We also show that the expected price of each good increases with both expected production costs, the effect of the rival's expected production costs being dominated by the effect of the firm's own expected production costs.
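A minimal sketch of the equilibrium concept involved, in a standard two-type formulation that is not necessarily the authors' exact model:

```latex
% Firm i's marginal cost c_i takes a low or high value according to a
% known probability distribution (two technologies). Prices are chosen
% simultaneously; a type-contingent strategy p_i(c_i) is a Bayesian
% Nash equilibrium when each type's price is a best reply in expectation:
\[
p_i(c_i) \in \arg\max_{p_i}\;
\mathbb{E}_{c_j}\!\left[(p_i - c_i)\, D_i\bigl(p_i, p_j(c_j)\bigr)\right],
\]
% for instance with linear demand D_i(p_i, p_j) = a - b p_i + d p_j,
% where b > d > 0 captures imperfect substitutability.
```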
Abstract:
Order picking consists in retrieving products from storage locations to satisfy independent orders from multiple customers. It is generally recognized as one of the most significant activities in a warehouse (Koster et al., 2007). In fact, order picking accounts for up to 50% (Frazelle, 2001) or even 80% (Van den Berg, 1999) of total warehouse operating costs. The critical issue in today's business environment is to simultaneously reduce the cost and increase the speed of order picking. In this paper, we address the order picking process in one of the largest Portuguese companies in the grocery business. This problem was proposed at the 92nd European Study Group with Industry (ESGI92). In this setting, each operator steers a trolley on the shop floor in order to select items for multiple customers. The objective is to improve the company's grocery e-commerce and bring it up to the level of the best international practices. In particular, the company wants to improve the routing tasks in order to decrease travel distances. For this purpose, a mathematical model for faster open-shop picking was developed. In this paper, we describe the problem and our proposed solution, as well as some preliminary results and conclusions.
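As a flavour of the routing task, a minimal nearest-neighbour heuristic for ordering pick locations on the shop floor; this is a generic heuristic for illustration, not the mathematical model developed at ESGI92.

```python
import math

def route(picks, start=(0.0, 0.0)):
    # Nearest-neighbour heuristic: repeatedly walk to the closest
    # remaining pick location. Generic sketch, not the paper's model.
    pos, left, order = start, list(picks), []
    while left:
        nxt = min(left, key=lambda p: math.dist(pos, p))
        order.append(nxt)
        left.remove(nxt)
        pos = nxt
    return order

picks = [(3, 4), (1, 1), (5, 2), (2, 6)]   # illustrative aisle coordinates
print(route(picks))                        # visiting order for the trolley
```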
Abstract:
National companies face the need to respond to the market with a wide variety of products, small production runs, and short delivery times. The competitiveness of companies in a global market thus depends on their efficiency, their flexibility, the quality of their products, and low costs. Achieving these goals requires strategies and action plans involving the production equipment, including: the creation of new, complex, and more reliable equipment; the modernization of existing equipment so that it meets current needs and offers greater availability and productivity; and the implementation of maintenance policies that are more assertive and focused on the goal of "zero breakdowns", as is the case with predictive maintenance. In this context, the main objective of this work is to predict the optimal time for the maintenance of a piece of industrial equipment: a refiner at the Mangualde plant of Sonae Industria, which operates continuously 24 hours a day, 365 days a year. To this end, measurements from sensors that continuously monitor the state of the refiner are used. The main maintenance operation on this equipment is the replacement of the two metal discs of its main component, the defibrator. Consequently, the refiner sensor analysed in greatest detail is the one that measures the distance between the two defibrator discs. ARIMA models are an advanced statistical approach to time-series forecasting. Based on a description of the autocorrelation in the data, these models describe a time series as a function of its past values. In this work, the ARIMA methodology is used to build a model that forecasts future values of the sensor measuring the distance between the two defibrator discs, thereby determining the optimal moment for their replacement and avoiding forced production stoppages caused by disc-wear failures. The results obtained constitute an important scientific contribution to the field of predictive maintenance and fault detection in industrial equipment.
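A minimal sketch of the ARIMA forecasting step described above, run on synthetic stand-in data for the disc-distance sensor (the real series, model order, and replacement threshold are not given in the abstract):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the disc-distance sensor: slow wear drift + noise.
rng = np.random.default_rng(0)
wear = 0.80 + 0.0005 * np.arange(500) + rng.normal(0, 0.002, 500)

model = ARIMA(wear, order=(1, 1, 1)).fit()   # order is an assumption
forecast = model.forecast(steps=48)          # predict the next 48 readings

threshold = 1.05                             # assumed replacement limit
hits = np.nonzero(forecast >= threshold)[0]
print(f"replace in ~{hits[0]} steps" if hits.size
      else "no replacement needed within the horizon")
```

The same pattern applies to the real sensor: fit on the observed distance series, forecast ahead, and schedule disc replacement for the step where the forecast first crosses the wear limit.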
Abstract:
Many-core platforms are an emerging technology in the real-time embedded domain. These devices offer various options for power savings and cost reductions and contribute to overall system flexibility; however, issues such as unpredictability, scalability, and analysis pessimism are serious challenges to their integration into this area. The focus of this work is on many-core platforms using a limited migrative model (LMM). LMM is an approach based on the fundamental concepts of the multi-kernel paradigm, which is a promising step towards scalable and predictable many-cores. In this work, we formulate the problem of real-time application mapping on a many-core platform using LMM and propose a three-stage method to solve it. An extended version of the existing analysis is used to ensure that the derived mappings (i) guarantee the fulfilment of timing constraints posed on the worst-case communication delays of individual applications, and (ii) provide an environment in which to perform load balancing for, e.g., energy/thermal management, fault tolerance, and/or performance reasons.
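As a very rough flavour of the mapping problem, a minimal utilization-based worst-fit sketch; the paper's three-stage method and its LMM communication-delay analysis are considerably more involved than this.

```python
# Worst-fit mapping sketch: place each application on the least-loaded
# core whose remaining capacity fits its utilization. Generic heuristic,
# not the three-stage method from the paper; utilizations are made up.
apps = {"A1": 0.4, "A2": 0.3, "A3": 0.6, "A4": 0.2, "A5": 0.5}
cores = {c: 0.0 for c in ("C0", "C1", "C2")}

for app, u in sorted(apps.items(), key=lambda kv: -kv[1]):  # largest first
    core = min(cores, key=cores.get)        # worst-fit: least-loaded core
    if cores[core] + u > 1.0:
        raise RuntimeError(f"{app} does not fit on any core")
    cores[core] += u

print(cores)   # resulting per-core utilization
```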
Abstract:
A low-cost disposable sensor was developed for the rapid detection of the protein biomarker myoglobin (Myo) as a model analyte. A screen-printed electrode (SPE) was modified with a molecularly imprinted material grafted onto a graphite support and incorporated in a matrix composed of poly(vinyl chloride) and the plasticizer o-nitrophenyl octyl ether. The protein-imprinted material (PIM) was produced by growing a reticulated polymer around a protein template, followed by radical polymerization of 4-styrenesulfonic acid, 2-aminoethyl methacrylate hydrochloride, and ethylene glycol dimethacrylate. The polymeric layer was then covalently bound to the graphite support, with Myo added during the imprinting stage to act as the template. Non-imprinted control materials (CM) were also prepared by omitting the Myo template. Morphological and structural analysis of the PIM and CM by FTIR, Raman, and SEM/EDC microscopies confirmed the modification of the graphite support. The analytical performance of the SPE was assessed by square wave voltammetry. The average limit of detection is 0.79 μg of Myo per mL, and the slope is −0.193 ± 0.006 μA per decade. The SPE-CM cannot detect such low levels of Myo but gives a linear response above 7.2 μg·mL−1, with a slope of −0.719 ± 0.02 μA per decade. Interference studies with hemoglobin, bovine serum albumin, creatinine, and sodium chloride demonstrated good selectivity for Myo. The method was successfully applied to the determination of Myo in urine and is conceived as a promising tool for screening Myo at the point of care in patients with ischemia.
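A minimal sketch of reading a concentration off the reported calibration line (peak current versus log concentration); the slope comes from the abstract, while the intercept and sample current are assumed values for illustration only.

```python
# Calibration-line sketch: square-wave peak current vs log10(concentration).
# The slope of -0.193 uA/decade is quoted in the abstract; the intercept
# below is an assumption made purely for illustration.
slope_ua_per_decade = -0.193
intercept_ua = -1.20            # assumed intercept (uA)

def myoglobin_conc(peak_current_ua):
    """Invert i = intercept + slope * log10(C) to get C in ug/mL."""
    return 10 ** ((peak_current_ua - intercept_ua) / slope_ua_per_decade)

print(f"{myoglobin_conc(-1.50):.1f} ug/mL")   # for an assumed -1.50 uA peak
```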