Abstract:
The main objective of this thesis is to obtain a direct relationship between the composition of the liquefied petroleum gases (LPG) propane, n-butane and isobutane, used as aerosol propellants in a one-component polyurethane can, and the properties of the sprayed foams. The foams obtained must, as their main requirement, perform well at low temperatures (-10 °C) and are therefore designated winter foams. A foam is considered to perform well if, at -10/-10 °C (can/spray temperature), it shows no glass bubbles, base holes or cell collapse. The foams must also have spray-in-mould densities at +23/+23 °C below 30 g/L, a yield above 30 L, good dimensional stability and a foam output rate at +5/+5 °C above 5 g/s. The experimental trials were carried out at +23/+23 °C, +5/+5 °C and -10/-10 °C. At each temperature, the foams developed were subjected to tests to determine their quality. These tests include the so-called Quick Tests (QT): spraying the foams on paper and into a mould at the above temperatures. The paper and mould samples are examined in particular for glass bubbles, cell collapse, base holes, cell structure and cutting shrinkage, among other properties. The QT also include the analysis of the density in the mould (ODM) and the study of the foam output rate. In addition to the QT, dimensional stability tests, physical compression and adhesion tests, post-spray foam expansion tests and per-can yield tests were carried out. In all trials an adapter tube fitted to the can valve was used as the spray method, and the proportions of the raw materials were kept constant (except for the gases under study). The experiments began with a study of LPGs available on the aerosol market. These showed that the LPG propane/n-butane/isobutane (30/0/70 w/w%) produces the best winter foams at -10/-10 °C, reducing the glass bubbles, base holes and cell collapse produced by the other LPGs used as aerosols in polyurethane cans. Subsequent tests aimed to study the direct influence of each gas, propane, n-butane and isobutane, on the foams. For this purpose, two references from the commercial LPG study were used: 7396 (30/0/70 w/w%) and 7442 (0/0/100 w/w%). From these results it was concluded that n-butane gives the foams poor properties at -10/-10 °C, forming large amounts of glass bubbles, base holes and cell collapse. The use of propane reduces these glass bubbles but, in return, causes cell collapse. Isobutane, in turn, reduces cell collapse but not the glass bubbles. The experimental results show that the output rate at +5/+5 °C and the foam density at +23/+23 °C are influenced by the LPG composition. Propane and n-butane increase the foam output rate of the cans and the foam density, unlike isobutane. Nevertheless, the results obtained show that isobutane gives the best foam yields per can. We can conclude that LPGs containing about 30 w/w% propane (good output rates at +5/+5 °C and fewer glass bubbles at -10/-10 °C) and about 70 w/w% isobutane (good foam yields and less cell collapse at -10/-10 °C) produced the best foams. Tests were also carried out on the influence of the amount of LPG present in a can.
The analysis of the LPG volume used was based on the best foam obtained in the previous studies, 7396, with an LPG of (30/0/70 w/w%), and changes were made to the volume of LPG present in the prepolymer. The study concluded that increasing this volume can decrease the foam density, while decreasing it increases the density. It also indicated that a poorly adjusted volume may give the foams poor properties. The economic analysis concluded that the cost of foams with more LPG in their formulation falls by about 3% when the LPG volume in the prepolymer is increased by about 8%. This cost reduction is due to the fact that an increase in gas volume implies a decrease in the quantity of the remaining, more expensive raw materials, since the total useful volume of the can must always be kept at 750 mL. With the aim of improving the quality of the foam 7396 (30/0/70 w/w%) obtained in the previous trials, HFC-152a (1,1-difluoroethane) was added to the 7396 formulation. The results show that foams with poor properties are formed, especially at -10/-10 °C, although it provided excellent can shaking rates. A brief cost analysis indicates that its use is not advisable given the results obtained, as it does not provide a favourable cost/benefit balance. The three best foams obtained from all the studies were compared with a winter foam available on the market. 7396 and 7638, with a volume of 27% in the prepolymer and LPG compositions of (30/0/70 w/w%) and (13.7/0/86.3 w/w%), respectively, and 7690, with 37% volume in the prepolymer and LPG (30/0/70 w/w%), generally showed better results than the benchmark foam. However, their shaking rates at -10/-10 °C were considerably lower than those of the benchmark composition.
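As a rough illustration of the cost reasoning above (purely hypothetical unit costs; the thesis does not report these figures), the sketch below shows how raising the LPG share of a fixed 750 mL fill displaces the more expensive remaining raw materials:

```python
# Illustrative arithmetic only: hypothetical unit costs, not values from the thesis.
CAN_VOLUME_ML = 750.0           # useful can volume, fixed (from the abstract)
COST_GAS_PER_ML = 0.002         # hypothetical LPG cost (EUR/mL)
COST_PREPOLYMER_PER_ML = 0.010  # hypothetical cost of the remaining raw materials (EUR/mL)

def can_cost(gas_volume_fraction: float) -> float:
    """Raw-material cost of one can for a given LPG volume fraction."""
    gas_ml = CAN_VOLUME_ML * gas_volume_fraction
    prepolymer_ml = CAN_VOLUME_ML - gas_ml   # the gas displaces the pricier components
    return gas_ml * COST_GAS_PER_ML + prepolymer_ml * COST_PREPOLYMER_PER_ML

base = can_cost(0.27)              # 27 % LPG volume (reference formulation)
more_gas = can_cost(0.27 + 0.08)   # roughly 8 % more LPG by volume
print(f"cost change: {100 * (more_gas - base) / base:+.1f} %")
```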
Abstract:
This paper concerns the study of the application of biocides to old maritime pine (Pinus pinaster Ait.) timber structures previously impregnated with other products. A method was developed in the laboratory to determine in situ the penetration depth of a superficially applied product. Three traditional products for sawn timber for buildings were used as the initial treatment, and two newer, more environmentally benign products were used for the new treatments. Their ability to penetrate the pre-treated surfaces was evaluated after 1, 2 and 3 applications at 24-hour intervals, and the results obtained are presented. Finally, the applicability of the developed test to the in-situ evaluation of timber structures is also discussed.
Abstract:
Although stock prices fluctuate, the variations are relatively small and are frequently assumed to be normally distributed on a large time scale. Sometimes, however, these fluctuations become decisive, especially when unforeseen large drops in asset prices are observed that can result in huge losses or even in market crashes. The evidence shows that these events happen far more often than would be expected under the generalized assumption of normally distributed financial returns. It is therefore crucial to model the distribution tails properly so as to be able to predict the frequency and magnitude of extreme stock price returns. In this paper we follow the approach suggested by McNeil and Frey (2000) and combine GARCH-type models with Extreme Value Theory (EVT) to estimate the tails of the returns of three financial indices, DJI, FTSE 100 and NIKKEI 225, representing three important financial areas of the world. Our results indicate that EVT-based conditional quantile estimates are much more accurate than those from conventional AR-GARCH models assuming normally or Student's t-distributed innovations in out-of-sample estimation (for in-sample estimation, this holds for the right tail of the distribution of returns).
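A minimal sketch of the McNeil and Frey two-step idea follows: a Generalized Pareto Distribution is fitted to the tail of standardized residuals, and the resulting tail quantile is combined with one-step-ahead mean and volatility forecasts. The residuals and forecast values below are simulated stand-ins, not the paper's indices or estimates:

```python
# Step 2 of the two-step approach, assuming the AR-GARCH filtering step has already
# produced standardized residuals `z` and one-step forecasts `mu_next`, `sigma_next`.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
z = rng.standard_t(df=5, size=2000)      # stand-in for standardized residuals
mu_next, sigma_next = 0.0005, 0.012      # stand-in AR-GARCH one-step forecasts

def evt_quantile(z, q=0.99, threshold_quantile=0.90):
    """GPD-based tail quantile of the standardized residuals (loss tail, via -z)."""
    losses = -z                                   # work on the loss tail
    u = np.quantile(losses, threshold_quantile)   # threshold
    exceedances = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceedances, floc=0)
    n, n_u = len(losses), len(exceedances)
    return u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1)

z_q = evt_quantile(z, q=0.99)
var_99 = -(mu_next - sigma_next * z_q)   # conditional 99% Value-at-Risk (as a loss)
print(f"EVT tail quantile: {z_q:.3f}, conditional VaR(99%): {var_99:.4f}")
```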
Abstract:
Treating the lateral vibration problem of a machine rotor as a beam on elastic supports in bending, the authors deal with the free vibration of elastically restrained Bernoulli-Euler beams carrying a finite number of concentrated elements along their length. Based on Rayleigh's quotient, an iterative strategy is developed to find the approximate torsional stiffness coefficients, which allows the theoretical model results to be reconciled with the experimental ones obtained through impact tests. The algorithm treats the vibration of continuous beams under a given set of boundary and continuity conditions, including different torsional stiffness coefficients and the effect of attached concentrated masses and rotational inertias, not only in the energy terms of Rayleigh's quotient but also in the mode shapes, considering shape functions defined in branches. Several loading cases are examined, and examples are given to illustrate the validity of the model and the accuracy of the obtained natural frequencies.
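The reconciliation idea can be illustrated with a deliberately simplified lumped-parameter stand-in for the continuous beam model: iterate on an unknown restraint stiffness until the model's fundamental frequency, computed from Rayleigh's quotient, matches an "experimental" value. All numerical values are hypothetical:

```python
# A toy illustration (not the paper's continuous-beam formulation): identify an unknown
# support stiffness of a small lumped mass-spring model so that its fundamental
# frequency matches a measured one, iterating on the stiffness.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import brentq

m1, m2 = 1.0, 1.0          # lumped masses (kg), hypothetical
k1, k2 = 1.0e4, 1.0e4      # known spring stiffnesses (N/m)
M = np.diag([m1, m2])

def fundamental_frequency(k_support: float) -> float:
    """First natural frequency (Hz) of the 2-DOF model for a given support stiffness."""
    K = np.array([[k_support + k1, -k1],
                  [-k1, k1 + k2]])
    eigvals, eigvecs = eigh(K, M)              # generalized eigenproblem K x = lambda M x
    x1 = eigvecs[:, 0]
    rayleigh = x1 @ K @ x1 / (x1 @ M @ x1)     # Rayleigh quotient equals the eigenvalue here
    return np.sqrt(rayleigh) / (2 * np.pi)

f_experimental = fundamental_frequency(5.0e3)  # pretend this came from an impact test

# Find the support stiffness that reconciles the model with the "measurement".
k_identified = brentq(lambda k: fundamental_frequency(k) - f_experimental, 1.0e2, 1.0e5)
print(f"identified support stiffness: {k_identified:.1f} N/m")
```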
Abstract:
Long-term contractual decisions are the basis of efficient risk management. However, those types of decisions have to be supported by a robust price forecast methodology. This paper reports a different approach to long-term price forecasting that tries to answer that need. Making use of regression models, the proposed methodology has as its main objective to find the maximum and minimum Market Clearing Price (MCP) for a specific programming period, with a desired confidence level α. Due to the problem complexity, the meta-heuristic Particle Swarm Optimization (PSO) was used to find the best regression parameters, and the results are compared with those obtained using a Genetic Algorithm (GA). To validate these models, results from realistic data are presented and discussed in detail.
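A minimal sketch of the idea (not the authors' implementation): a plain particle swarm searching for the parameters of a simple price regression by minimizing squared error on synthetic data:

```python
# Minimal PSO fitting a linear price-regression model; model form and data are stand-ins.
import numpy as np

rng = np.random.default_rng(1)
demand = rng.uniform(20, 40, size=200)                    # explanatory variable (GW)
price = 5.0 + 1.8 * demand + rng.normal(0, 3, size=200)   # synthetic MCP (EUR/MWh)

def sse(params):
    a, b = params
    return np.sum((price - (a + b * demand)) ** 2)

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    lower, upper = np.array(bounds).T
    pos = rng.uniform(lower, upper, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

a_hat, b_hat = pso(sse, bounds=[(-50, 50), (-10, 10)])
print(f"fitted regression: price = {a_hat:.2f} + {b_hat:.2f} * demand")
```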
Abstract:
Short-term risk management is highly dependent on previously established long-term contractual decisions, on the agent's risk aversion factor and on short-term price forecast accuracy. To address that problem, this paper provides a different approach to short-term risk management in electricity markets. Based on long-term contractual decisions and making use of a price range forecast method developed by the authors, the short-term risk management tool presented here has as its main concern to find the optimal spot market strategies that a producer should adopt for a specific day as a function of his risk aversion factor, with the objective of maximizing profit while hedging against market price volatility. Due to the complexity of the optimization problem, the authors make use of Particle Swarm Optimization (PSO) to find the optimal solution. Results from realistic data, namely from the OMEL electricity market, are presented and discussed in detail.
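A much simplified stand-in for the tool described above, replacing PSO with an exhaustive search and using hypothetical prices and scenarios, shows how the risk aversion factor shifts the optimal split between a bilateral contract and the spot market:

```python
# Enumerate the share of a producer's daily energy sold on the spot market and pick the
# share maximizing expected profit minus a risk-aversion penalty on profit variance.
# Prices, quantities and risk aversion values are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
energy_mwh = 100.0                                  # energy available for one day
contract_price = 48.0                               # agreed bilateral price (EUR/MWh)
spot_scenarios = rng.normal(50.0, 12.0, size=1000)  # forecasted spot-price scenarios

def best_spot_share(risk_aversion: float, shares=np.linspace(0.0, 1.0, 101)):
    best = None
    for s in shares:
        profit = energy_mwh * (s * spot_scenarios + (1 - s) * contract_price)
        utility = profit.mean() - risk_aversion * profit.var()
        if best is None or utility > best[1]:
            best = (s, utility)
    return best[0]

for lam in (0.0, 1e-4, 1e-3):
    print(f"risk aversion {lam:g}: sell {best_spot_share(lam):.0%} on the spot market")
```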
Abstract:
In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has increased significantly. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, and optimization strategies are needed to adjust consumption to the supply profile. These optimization strategies can be integrated into demand response programs. Controlling the energy consumption of an intelligent house aims at optimizing the load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer's strategy and takes into account renewable-based micro-generation, energy price, supplier solicitations and consumer preferences. The proposed approach is compared with a mixed-integer non-linear approach.
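A minimal genetic-algorithm sketch of the load-curtailment step (without the SCADA integration, and with hypothetical load powers, preference weights and limit) could look as follows:

```python
# Choose which residential loads to keep on so that total power stays at or below a
# limit while maximizing the consumer-preference value of the loads kept.
import numpy as np

rng = np.random.default_rng(3)
power_kw = np.array([2.0, 1.5, 0.8, 1.2, 3.0, 0.3, 0.6, 1.1])    # load powers (kW)
preference = np.array([9.0, 4.0, 6.0, 3.0, 8.0, 2.0, 5.0, 1.0])  # consumer preference
limit_kw = 5.0                                                    # consumption limit

def fitness(keep: np.ndarray) -> float:
    total = keep @ power_kw
    penalty = 100.0 * max(0.0, total - limit_kw)   # penalize exceeding the limit
    return keep @ preference - penalty

def genetic_algorithm(pop_size=40, generations=100, mutation_rate=0.05):
    pop = rng.integers(0, 2, size=(pop_size, len(power_kw)))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection (size 2)
        parents = pop[[max(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
                       for _ in range(pop_size)]]
        # one-point crossover
        children = []
        for i in range(0, pop_size, 2):
            cut = rng.integers(1, len(power_kw))
            children.append(np.concatenate([parents[i, :cut], parents[i + 1, cut:]]))
            children.append(np.concatenate([parents[i + 1, :cut], parents[i, cut:]]))
        children = np.array(children)
        # bit-flip mutation
        flips = rng.random(children.shape) < mutation_rate
        pop = np.where(flips, 1 - children, children)
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

best = genetic_algorithm()
print("loads kept on:", best, "total power:", best @ power_kw, "kW")
```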
Abstract:
The concept of demand response has a growing importance in the context of future power systems. Demand response can be seen as a resource like distributed generation, storage, electric vehicles, etc. All these resources require an infrastructure able to give players the means to operate and use them in an efficient way. This infrastructure implements in practice the smart grid concept and should accommodate a large number of diverse types of players in the context of a competitive business environment. In this paper, demand response is optimally scheduled jointly with other resources, such as distributed generation units and the energy provided by the electricity market, minimizing the operation costs from the point of view of a virtual power player who manages these resources and supplies the aggregated consumers. The optimal schedule is obtained using two approaches based on particle swarm optimization (with and without mutation), which are compared with a deterministic approach used as a reference methodology. A case study with two scenarios, implemented in DemSi, a demand response simulator developed by the authors, shows the advantages of the proposed particle swarm approaches.
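The deterministic reference idea can be sketched as a small linear program (hypothetical prices, limits and demand; not DemSi or the paper's PSO variants):

```python
# Cover one period's aggregated demand at minimum cost using distributed generation,
# energy bought on the electricity market, and demand response (load reduction).
from scipy.optimize import linprog

demand_mw = 12.0
# decision variables: [dg1, dg2, market, dr]
cost = [45.0, 60.0, 70.0, 80.0]            # EUR/MWh of each resource (hypothetical)
upper = [5.0, 4.0, 10.0, 3.0]              # capacity / contracted limits (MW)

# equality constraint: dg1 + dg2 + market + dr == demand
res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0, 1.0]], b_eq=[demand_mw],
              bounds=[(0.0, u) for u in upper],
              method="highs")
dg1, dg2, market, dr = res.x
print(f"DG1 {dg1:.1f} MW, DG2 {dg2:.1f} MW, market {market:.1f} MW, DR {dr:.1f} MW, "
      f"cost {res.fun:.0f} EUR")
```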
Abstract:
This paper proposes a swarm intelligence long-term hedging tool to support electricity producers in competitive electricity markets. This tool investigates the long-term hedging opportunities available to electric power producers through the use of contracts with physical (spot and forward) and financial (options) settlement. To find the optimal portfolio, the producer's risk preference is stated by a utility function (U) expressing the trade-off between the expectation and the variance of the return. Variance estimation and the expected return are based on a forecasted scenario interval determined by a long-term price range forecast model, developed by the authors, whose explanation is outside the scope of this paper. The proposed tool makes use of Particle Swarm Optimization (PSO), and its performance has been evaluated by comparing it with a Genetic Algorithm (GA) based approach. To validate the risk management tool, a case study using real historical price data from the mainland Spanish market is presented to demonstrate the effectiveness of the proposed methodology.
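A simplified illustration of the portfolio scoring is sketched below, assuming the common form U = E[R] - λ·Var[R] for the stated expectation/variance trade-off; the instruments, prices and scenario interval are made up, and exhaustive enumeration replaces the PSO/GA search:

```python
# Score candidate portfolios of spot sales, forward sales and put options with a
# mean-variance utility over scenarios drawn from an assumed forecasted price range.
import numpy as np

rng = np.random.default_rng(4)
energy = 100.0                                   # MWh to sell
forward_price = 52.0                             # EUR/MWh
strike, premium = 50.0, 3.5                      # put option on the spot price
spot = rng.uniform(35.0, 70.0, size=2000)        # scenarios from a forecasted price range
risk_aversion = 5e-5

best = None
for w_forward in np.linspace(0.0, 1.0, 21):          # share sold forward
    for w_option in np.linspace(0.0, 1.0, 21):       # share of spot exposure covered by puts
        spot_energy = energy * (1.0 - w_forward)
        revenue = (energy * w_forward * forward_price
                   + spot_energy * spot
                   + spot_energy * w_option * (np.maximum(strike - spot, 0.0) - premium))
        utility = revenue.mean() - risk_aversion * revenue.var()
        if best is None or utility > best[0]:
            best = (utility, w_forward, w_option)
print(f"best portfolio: {best[1]:.0%} forward, {best[2]:.0%} of spot exposure with puts")
```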
Abstract:
This paper aims to study the relationships between the chromosomal DNA sequences of twenty species. We propose a methodology combining DNA-based word frequency histograms, correlation methods, and an MDS technique to visualize structural information underlying chromosomes (CRs) and species. Four statistical measures are tested (Minkowski, Cosine, Pearson product-moment, and Kendall τ rank correlations) to analyze the information content of 421 nuclear CRs from twenty species. The proposed methodology is built on mathematical tools and allows the analysis and visualization of very large amounts of stream data, like DNA sequences, with almost no assumptions other than the predefined DNA “word length.” It is able to produce comprehensible three-dimensional visualizations of CR clustering and related spatial and structural patterns. The results of the four test correlation scenarios show that the high-level information clusterings produced by the MDS tool are qualitatively similar, with small variations due to the characteristics of each correlation method, and that the clusterings are a consequence of the input data and not artifacts of the method.
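A small end-to-end sketch of the pipeline (word-frequency histograms, a Pearson-correlation distance, classical MDS), run on short synthetic sequences instead of real chromosomes and with an arbitrary word length of 3:

```python
from itertools import product
import numpy as np

WORD_LEN = 3  # the predefined DNA "word length"
WORDS = ["".join(w) for w in product("ACGT", repeat=WORD_LEN)]

def word_histogram(seq: str) -> np.ndarray:
    """Relative frequency of each overlapping word of length WORD_LEN."""
    counts = {w: 0 for w in WORDS}
    for i in range(len(seq) - WORD_LEN + 1):
        counts[seq[i:i + WORD_LEN]] += 1
    h = np.array([counts[w] for w in WORDS], dtype=float)
    return h / h.sum()

rng = np.random.default_rng(5)
def random_seq(length, p):  # biased base composition just to create structure
    return "".join(rng.choice(list("ACGT"), size=length, p=p))

sequences = {
    "seq_A1": random_seq(5000, [0.4, 0.1, 0.1, 0.4]),
    "seq_A2": random_seq(5000, [0.4, 0.1, 0.1, 0.4]),
    "seq_B1": random_seq(5000, [0.1, 0.4, 0.4, 0.1]),
    "seq_B2": random_seq(5000, [0.1, 0.4, 0.4, 0.1]),
}
H = np.array([word_histogram(s) for s in sequences.values()])

# Pearson-correlation distance matrix between histograms
D = 1.0 - np.corrcoef(H)

# Classical MDS: double-center the squared distances and keep the leading eigenvectors
J = np.eye(len(D)) - np.ones_like(D) / len(D)
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
coords = eigvecs[:, -3:] * np.sqrt(np.maximum(eigvals[-3:], 0.0))  # 3-D embedding

for name, xyz in zip(sequences, coords):
    print(name, np.round(xyz, 3))
```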
Abstract:
This paper proposes two meta-heuristics (Genetic Algorithm and Evolutionary Particle Swarm Optimization) for solving a 15-bid case of Ancillary Services Dispatch in an Electricity Market. A Linear Programming approach is also included for comparison purposes. A test case based on the dispatch of Regulation Down, Regulation Up, Spinning Reserve and Non-Spinning Reserve services is used to demonstrate that the use of meta-heuristics is suitable for solving this kind of optimization problem. Faster execution times and lower computational resource requirements are the most relevant advantages of the used meta-heuristics when compared with the Linear Programming approach.
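As a simplified baseline (not the paper's meta-heuristics or LP model), a merit-order dispatch that accepts the cheapest bids for each service until its requirement is met can be sketched with hypothetical bid data:

```python
# Dispatch bids per ancillary service in merit order (cheapest first) until the
# service requirement is met. Bid prices, quantities and requirements are hypothetical.
requirements_mw = {"Regulation Down": 30, "Regulation Up": 40,
                   "Spinning Reserve": 50, "Non-Spinning Reserve": 60}

# (bidder, price EUR/MW, quantity MW) per service
bids = {
    "Regulation Down": [("G1", 12.0, 20), ("G2", 15.0, 25), ("G3", 18.0, 10)],
    "Regulation Up": [("G1", 14.0, 15), ("G4", 16.0, 30), ("G2", 21.0, 20)],
    "Spinning Reserve": [("G5", 10.0, 25), ("G3", 13.0, 20), ("G4", 17.0, 30)],
    "Non-Spinning Reserve": [("G6", 8.0, 40), ("G5", 11.0, 25), ("G2", 19.0, 30)],
}

total_cost = 0.0
for service, requirement in requirements_mw.items():
    remaining = requirement
    for bidder, price, quantity in sorted(bids[service], key=lambda b: b[1]):
        accepted = min(quantity, remaining)
        if accepted <= 0:
            break
        total_cost += accepted * price
        remaining -= accepted
        print(f"{service}: accept {accepted} MW from {bidder} at {price} EUR/MW")
print(f"total dispatch cost: {total_cost:.0f} EUR")
```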
Abstract:
This essay suggests that intersubjectivity in translation should be given priority because different stages of the translation activity have different subjects. It presents a practical intersubjective ethics of translation based on an interpretation, from a game-theory perspective, of the intersubjective relations connected with translation activities, in the hope that this can better explain the translator's calculations and considerations in professional practice.
Abstract:
Introduction: Image resizing is a standard feature of Nuclear Medicine digital imaging. Upsampling is done by manufacturers to better fit the acquired images to the display screen, and it is applied when there is a need to increase - or decrease - the total number of pixels. This paper aims to compare the “hqnx” and the “nxSaI” magnification algorithms with two interpolation algorithms – “nearest neighbor” and “bicubic interpolation” – in image upsampling operations. Material and Methods: Three distinct Nuclear Medicine images were enlarged 2 and 4 times with the different digital image resizing algorithms (nearest neighbor, bicubic interpolation, nxSaI and hqnx). To evaluate the pixel changes between the different output images, 3D whole-image plot profiles and surface plots were used in addition to the visual approach in the 4x upsampled images. Results: In the 2x enlarged images the visual differences were not particularly noteworthy, although it was clearly noticeable that bicubic interpolation presented the best results. In the 4x enlarged images the differences were significant, with the bicubic interpolated images presenting the best results. Hqnx resized images presented better quality than 4xSaI and nearest neighbor interpolated images; however, their intense “halo effect” greatly affects the definition and boundaries of the image contents. Conclusion: The hqnx and the nxSaI algorithms were designed for images with clear edges, so their use in Nuclear Medicine images is clearly inadequate. Of the algorithms studied, bicubic interpolation seems the most suitable, and its increasingly wide application seems to confirm this, as it can be regarded as an efficient algorithm for multiple image types.
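The two standard interpolation methods compared above can be reproduced with scipy.ndimage.zoom (order=0 is nearest neighbor; order=3 is cubic-spline interpolation, comparable to bicubic). The hqnx and nxSaI pixel-art scalers are not available in SciPy and are not reproduced here; the test image below is synthetic:

```python
import numpy as np
from scipy import ndimage

# a small synthetic "hot spot" image standing in for a Nuclear Medicine acquisition
y, x = np.mgrid[0:64, 0:64]
image = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)

nearest_4x = ndimage.zoom(image, 4, order=0)   # nearest neighbor, 4x upsampling
cubic_4x = ndimage.zoom(image, 4, order=3)     # cubic spline, 4x upsampling

print("original:", image.shape, "-> upsampled:", nearest_4x.shape)
print("max abs difference between the two methods:",
      float(np.abs(nearest_4x - cubic_4x).max()))
```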
Abstract:
Conference: 39th Annual Conference of the IEEE Industrial Electronics Society (IECON), 10-14 November 2013
Abstract:
In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled because conventional non-polar partitioning models are applied. High uncertainties are therefore expected when modeling the mobility of dissociating organic chemicals, as well as their bioavailability for uptake by exposed biota and their degradation. Alternative regressions that account for the ionized fraction of a molecule when estimating fate parameters were applied to the USEtox model. The model parameters most sensitive in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. Negligible differences in CF values and 95% confidence limits between the two approaches were estimated for direct emissions to the freshwater compartment; however, for emissions to the agricultural soil compartment, the default USEtox model overestimates the CFs and the 95% confidence limits of basic compounds by up to three and four orders of magnitude, respectively, relative to the alternative approach. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for the emission scenarios to agricultural soil by one order of magnitude, with larger confidence limits, relative to the alternative approach.
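A generic sketch of this kind of Monte Carlo uncertainty analysis, treating a characterization factor as the product of fate, exposure and effect factors with lognormal uncertainty (illustrative values, not USEtox parameters):

```python
import numpy as np

rng = np.random.default_rng(6)
N = 100_000

def lognormal(median, gsd):
    """Samples with a given median and geometric standard deviation."""
    return rng.lognormal(mean=np.log(median), sigma=np.log(gsd), size=N)

fate = lognormal(median=12.0, gsd=2.0)       # residence time in freshwater (days)
exposure = lognormal(median=0.6, gsd=1.5)    # exposure factor (dimensionless)
effect = lognormal(median=300.0, gsd=3.0)    # effect factor (PAF m3 / kg)

cf = fate * exposure * effect                # characterization factor (PAF m3 day / kg)
lo, median, hi = np.percentile(cf, [2.5, 50, 97.5])
print(f"CF median: {median:.0f}, 95% confidence limits: [{lo:.0f}, {hi:.0f}]")
```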