Abstract:
We propose new theoretical models that generalize the classical Avrami-Nakamura models. These models are suitable for describing the kinetics of nucleation and growth in the transient regime, and/or with overlapping nucleation and growth. Simulations and predictions were performed for lithium disilicate based on data reported in the literature. We re-examined the limitations of the models currently used to interpret DTA or DSC results and to extract the relevant kinetic parameters. Glasses and glass-ceramics with molar formulation 0.45SiO2·(0.45−x)MgO·xK2O·0.1(3CaO·P2O5) (0 ≤ x ≤ 0.090) were prepared, crystallized and studied as potential materials for biomedical applications. Substitution of K+ for Mg2+ was used to prevent devitrification on cooling, to adjust the kinetics of crystallization and to modify the in vitro behaviour of the resulting biomaterials. The crystallization of the glass frits was studied by DTA, XRD and SEM. Exothermic peaks were detected corresponding to bulk crystallization of a whitlockite-type phosphate, Ca9MgK(PO4)7, at approximately 900 °C, and to surface crystallization of a predominant forsterite phase (Mg2SiO4) at higher temperatures. XRD also revealed the presence of diopside (CaMgSi2O6) in some samples. The predominant microstructure of the phosphate phase is plate-like, seemingly crystallizing by a two-dimensional growth mechanism. Impedance spectroscopy revealed significant changes in electrical behaviour associated with the crystallization of the phosphate phase. This showed that electrical measurements can be used to study the kinetics of crystallization in cases where DTA or DSC experiments reveal limitations, and to extract estimates of the relevant parameters from the dependence of the crystallization peak temperature and its width at half height.
In vitro studies of glasses and glass-ceramics in acellular SBF media showed bioactivity and the development of apatite layers. The morphology, composition and adhesion of the apatite layer could be changed by substitution of Mg2+ by K+. Apatite layers were deposited on the surface of glass-ceramics of the nominal compositions with x = 0 and 0.09, in contact with SBF at 37 °C. The adhesion of the apatite layer was quantified by the scratch-test technique and related to the SBF immersion time, to the composition and structure of the glass phase, and to the morphology of the crystalline phase of the glass-ceramics. The structure of three glasses (x = 0, 0.045 and 0.090) was investigated by MAS-NMR (29Si and 31P), showing that the fraction of Q3 structural units increases with the Mg content, and that the structure of these glasses includes orthophosphate groups (PO43−) preferentially connected to Ca2+ ions. Mg2+ ions show a preference for the silicate network. Substitution of Mg2+ by K+ made it possible to change the bioactivity. FTIR data revealed octacalcium phosphate precipitation (Ca8H2(PO4)6·5H2O) in the glass without K, while the morphology of the layer acquires the shape of partially superimposed hemispheres spread over the surface. The glasses with K present a layer of acicular hydroxyapatite, whose crystallinity and needle thickness tend to increase with the K content.
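The classical Avrami (JMAK) kinetics that the proposed models generalize can be sketched numerically. The parameters below are illustrative only, not values fitted to the lithium disilicate data:

```python
import math

def avrami_fraction(t, k, n):
    """Crystallized fraction under the classical Avrami (JMAK) model,
    X(t) = 1 - exp(-(k t)^n), for isothermal crystallization."""
    return 1.0 - math.exp(-((k * t) ** n))

# Illustrative parameters: an exponent n = 2 corresponds to the
# two-dimensional growth suggested for the plate-like phosphate phase.
k, n = 0.01, 2.0
for t in (0, 50, 100, 200):
    print(f"t = {t:3d}  X = {avrami_fraction(t, k, n):.3f}")
```

The generalized models in the abstract relax the assumptions (steady-state nucleation, non-overlapping nucleation and growth) under which this closed form holds.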
Abstract:
In the past thirty years, a series of plans have been developed by successive Brazilian governments in a continuing effort to maximize the nation's resources for economic and social growth. This planning history has been quantitatively rich but qualitatively poor. The disjunction has stimulated Professor Mello e Souza to address himself to the problem of national planning and to offer some criticisms of Brazilian planning experience. Though political instability has obviously been a factor promoting discontinuity, his criticisms are aimed at the attitudes and strategic concepts which have sought to link planning to national goals and administration. He criticizes the fascination with techniques and plans to the exclusion of proper diagnosis of the socio-political reality, of developing instruments to coordinate and carry out objectives, and of creating an administrative structure centralized enough to make national decisions and decentralized enough to perform on the basis of those decisions. Thus, fixed, quantified objectives abound while the problem of functioning mechanisms for the coordinated, rational use of resources has been left unattended. Although his interest and criticism are focused on the process and experience of national planning, he recognizes variation in the level and results of Brazilian planning. National plans have failed due to a faulty conception of the function of planning. Sectorial plans, save in the sector of the petroleum industry under government responsibility, have not succeeded in overcoming the problems of formulation and execution, thereby repeating old technical errors. Planning for the private sector has a somewhat brighter history due to the use of Grupos Executivos, which has enabled the planning process to transcend the formalism and tradition-bound attitudes of the regular bureaucracy. Regional planning offers two relatively successful experiences, Sudene and the strategy of the regionally oriented autarchy.
Thus, planning history in Brazil is not entirely black but a certain shade of grey. The major part of the article, however, is devoted to a descriptive analysis of the national planning experience. The plans included in this analysis are: the Works and Equipment Plan (POE); the Health, Food, Transportation and Energy Plan (Salte); the Program of Goals; the Trienal Plan of Economic and Social Development; and the Plan of Governmental Economic Action (Paeg). Using these five plans as his historical experience, the author sets out a series of errors of formulation and execution by which he analyzes that experience. With respect to formulation, he speaks of a lack of elaboration of programs and projects, of coordination among diverse goals, and of provision of qualified staff and techniques. He mentions the absence of a definition of the resources necessary to finance the plan and the inadequate quantification of sectorial and national goals due to the lack of reliable statistical information. Finally, he notes the failure to coordinate the annual budget with the multi-year plans. He sees the problems of execution as beginning in the absence of coordination between the various sectors of the public administration, the failure to develop an operative system of decentralization, the absence of any system of financial and fiscal control over execution, the difficulties imposed by the system of public accounting, and the absence of an adequate program of allocation for the liberation of resources. He ends by pointing to the failure to develop and use an integrated system of political-economic tools in a mode compatible with the objectives of the plans. The body of the article analyzes national planning experience in Brazil using these lists of errors as a rough model of criticism. Several conclusions emerge from this analysis with regard to planning in Brazil and in developing countries in general.
Plans have generally been of little avail in Brazil because of the lack of a continuous, bureaucratized (in the Weberian sense) planning organization set in an instrumentally suitable administrative structure and based on thorough diagnoses of socio-economic conditions and problems. Plans have become the justification for planning. Planning has come to be conceived as a rational method of orienting the process of decisions through the establishment of a precise and quantified relation between means and ends. But this conception has led to a planning history rimmed with frustration and failure because of its rigidity in the face of flexible and changing reality. Rather, he suggests a conception of planning which understands it "as a rational process of formulating decisions about the policy, economy, and society whose only demand is that of managing the instrumentarium in a harmonious and integrated form in order to reach explicit, but not quantified ends". He calls this "planning without plans": the establishment of broad-scale tendencies through diagnosis, whose implementation is carried out through an adjustable, coherent instrumentarium of political-economic tools. Administration according to a plan of multiple, integrated goals is a sound procedure if the nation's administrative machinery contains the technical development needed to control the multiple variables linked to any situation of socio-economic change. Brazil does not possess this level of refinement, and any strategy of planning relevant to its problems must recognize this. The reforms which have been attempted fail to make this recognition, as is true of the conception of planning informing the Brazilian experience. Therefore, unworkable plans, ill-diagnosed, with little or no supportive instrumentarium or flexibility, have been Brazil's legacy. This legacy seems likely to continue until the conception of planning comes to live in the reality of Brazil.
Abstract:
Novel alternating copolymers comprising biscalix[4]arene-p-phenylene ethynylene and m-phenylene ethynylene units (CALIX-m-PPE) were synthesized by Sonogashira-Hagihara cross-coupling polymerization. Good isolated yields (60-80%) were achieved for the polymers, which show Mn ranging from 1.4 × 10^4 to 5.1 × 10^4 g mol^−1 (gel permeation chromatography analysis), depending on the specific polymerization conditions. The structural analysis of CALIX-m-PPE was performed by 1H, 13C, 13C-1H heteronuclear single quantum correlation (HSQC), 13C-1H heteronuclear multiple bond correlation (HMBC), correlation spectroscopy (COSY) and nuclear Overhauser effect spectroscopy (NOESY), in addition to Fourier-transform infrared spectroscopy and microanalysis, allowing its full characterization. Depending on the reaction setup, variable amounts (16-45%) of diyne units were found in the polymers, although their photophysical properties are essentially the same. It is demonstrated that CALIX-m-PPE does not form ground- or excited-state interchain interactions, owing to the highly crowded environment of the main chain imparted by both calix[4]arene side units, which behave as insulators inhibiting main-chain π-π stacking. It was also found that the luminescent properties of CALIX-m-PPE are markedly different from those of an all-p-linked phenylene ethynylene copolymer (CALIX-p-PPE) previously reported. The unexpected appearance of a low-energy emission band at 426 nm, in addition to the locally excited-state emission (365 nm), together with a quite low fluorescence quantum yield (Φ = 0.02) and double-exponential decay dynamics, led to the formulation of an intramolecular exciplex as the new emissive species.
Abstract:
We generalize the Flory-Stockmayer theory of percolation to a model of associating (patchy) colloids, which consists of hard spherical particles having on their surfaces f short-ranged attractive sites of m different types. These sites can form bonds between particles and thus promote self-assembly. It is shown that the percolation threshold is given in terms of the eigenvalues of an m × m matrix, which describes the recursive relations for the number of bonded particles on the i-th level of a cluster with no loops; percolation occurs when the largest of these eigenvalues equals unity. Expressions are also derived for the probability that a particle is not bonded to the giant cluster, for the average cluster size, and for the average size of a cluster to which a randomly chosen particle belongs. Explicit results for these quantities are computed for the case f = 3 and m = 2. We show how these structural properties are related to the thermodynamics of the associating system by regarding bond formation as an (equilibrium) chemical reaction. This solution of the percolation problem, combined with Wertheim's thermodynamic first-order perturbation theory, allows the investigation of the interplay between phase behaviour and cluster formation for general models of patchy colloids.
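The eigenvalue criterion can be illustrated with a small numerical sketch. The site counts (two sites of type A, one of type B, so f = 3 and m = 2) and the bonding probabilities below are hypothetical, chosen only to show the mechanics of the criterion:

```python
import numpy as np

def percolation_matrix(pA, pB):
    """Tree-like branching matrix for a particle with f = 3 sites:
    two of type A and one of type B (hypothetical site counts).
    Row = type of the site through which the particle was reached;
    entry (i, j) = expected number of bonded sites of type j among
    the remaining f - 1 sites, with pA, pB the bonding probabilities."""
    return np.array([
        [1 * pA, 1 * pB],   # entered via A: one A and one B site remain
        [2 * pA, 0.0],      # entered via B: two A sites remain
    ])

def percolates(pA, pB):
    """A giant cluster appears when the largest eigenvalue reaches unity."""
    lam = max(abs(np.linalg.eigvals(percolation_matrix(pA, pB))))
    return lam >= 1.0

print(percolates(0.1, 0.1), percolates(0.9, 0.9))  # prints: False True
```

The bonding probabilities would themselves come from Wertheim's theory at a given temperature and density; here they are free inputs.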
Abstract:
Most transport infrastructures, namely road and airport pavements, are built with bituminous mixtures, which provide good performance and adequate durability under usual service conditions. Bituminous mixtures are also widely used in the construction of vehicle parking areas, and have recently been applied in railway infrastructures as well. Given the need to improve the performance of railway tracks, allowing a more durable design of high-speed lines and a reduction of their maintenance costs, several studies have been carried out to promote the use of new materials, mainly through the incorporation of bituminous mixtures. The present work aims to characterize the mechanical behaviour of bituminous mixtures to be applied in transport infrastructures. As a methodology for studying the mechanical behaviour of bituminous mixtures, repeated-load laboratory tests were performed, namely four-point bending tests to determine stiffness and fatigue resistance, and cyclic triaxial compression tests to evaluate permanent-deformation behaviour. The fatigue resistance of the bituminous mixtures under study was evaluated by the four-point bending test, under controlled strain, applying sinusoidal loading at different frequencies, according to the test procedure of European standard EN 12697-24 (2004 + A1: 2007). The resistance to permanent deformation of the bituminous mixtures was analysed through cyclic triaxial compression tests, subjecting them to a static confining stress applied by partial vacuum and a cyclic axial pressure with a rectangular waveform, according to European standard EN 12697-25 (2004).
Knowledge of these mechanical properties is particularly important for the mix design of bituminous mixtures, for the design of a structure, and for establishing an adequate solution for the rehabilitation of a transport infrastructure. For this study, a physical model built in a pit at LNEC was used to test three non-conventional railway substructures using bituminous sub-ballast. The substructures were selected after analysing several sections of structures already tested and applied in other countries, so as to provide reliable comparisons between them. The results showed that the AC20 base 50/70 (MB) bituminous mixture applied in the sub-ballast layer is suitable for transport infrastructures, as it exhibits good fatigue and permanent-deformation performance. The tests performed also made it possible to understand the important influence of the volumetric characteristics, especially porosity, on the good behaviour of the bituminous mixture.
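Strain-controlled fatigue results of the kind produced by the four-point bending test are usually condensed into a fatigue law of the form N = a·ε^(−b). A minimal fitting sketch, on invented data rather than the LNEC measurements:

```python
import numpy as np

# Hypothetical strain-controlled fatigue results (not the LNEC data):
# tensile strain level (microstrain) vs. cycles to failure.
strain = np.array([400.0, 300.0, 200.0, 150.0])
cycles = np.array([2.0e4, 8.0e4, 6.0e5, 2.0e6])

# Classical fatigue law N = a * strain**(-b) is linear in log-log space.
b, log_a = np.polyfit(np.log(strain), np.log(cycles), 1)
a = np.exp(log_a)
b = -b  # the fitted slope is negative; report the exponent as positive

# Strain level for 1e6 cycles (the usual epsilon_6 design parameter):
eps6 = (a / 1.0e6) ** (1.0 / b)
print(f"fatigue exponent b = {b:.2f}, epsilon_6 = {eps6:.0f} microstrain")
```

The fitted exponent and ε6 are the quantities typically carried forward into pavement (or sub-ballast) design.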
Abstract:
The main objective of this thesis is to obtain a direct relation between the composition of the liquefied petroleum gases (LPG) — propane, n-butane and isobutane — used as aerosol propellants in a one-component polyurethane can, and the properties of the foams produced by spraying. The main requirement for the resulting foams is good performance at low temperature, −10 °C, and they are therefore designated winter foams. A foam is considered to perform well if, at −10/−10 °C (can/spray temperature), it shows no glass bubbles, base holes or cell collapse. The foams should also have spray-in-mould densities at +23/+23 °C below 30 g/L, a yield above 30 L, good dimensional stability, and a foam output rate at +5/+5 °C above 5 g/s. The experimental trials were carried out at +23/+23 °C, +5/+5 °C and −10/−10 °C. At each temperature, the foams developed were submitted to tests to determine their quality. These include the so-called Quick Tests (QT): spraying the foams on paper and in a mould at the aforementioned temperatures. The paper and mould samples are examined in particular for glass bubbles, cell collapse, base holes, cell structure and cutting shrinkage, among other properties. The QT also include the analysis of the density in the mould (ODM) and the study of the foam output rate. Besides the QT, tests were performed on the dimensional stability of the foams, physical compression and adhesion tests, foam expansion after spraying, and foam yield per can. In all trials an adapter tube fitted to the can valve was used as the spraying method, and the proportion of raw materials (except the gases under study) was kept constant. The experiments began with the study of LPGs available on the aerosol market.
These showed that the LPG propane/n-butane/isobutane (30/0/70 w/w%) produces the best winter foams at −10/−10 °C, thereby reducing the glass bubbles, base holes and cell collapse produced by the other LPGs used as aerosols in polyurethane cans. Subsequent tests aimed to study the direct influence of each gas — propane, n-butane and isobutane — on the foams. For this purpose, two references from the study of commercial LPGs were used, 7396 (30/0/70 w/w%) and 7442 (0/0/100 w/w%). From these results it was concluded that n-butane produces poor foam properties at −10/−10 °C, forming large amounts of glass bubbles, base holes and cell collapse. The use of propane reduces those glass bubbles but, on the other hand, causes cell collapse. Isobutane, in turn, reduces cell collapse but not glass bubbles. The experimental results show that the output rate at +5/+5 °C and the foam density at +23/+23 °C are influenced by the LPG composition. Propane and n-butane increase the foam output rate of the cans and the foam density, contrary to what happens with isobutane. Nevertheless, according to the results obtained, isobutane provides the best foam yields per can. We can conclude that LPGs containing about 30 w/w% propane (good output rates at +5/+5 °C and fewer glass bubbles at −10/−10 °C) and about 70 w/w% isobutane (good foam yields, as well as less cell collapse at −10/−10 °C) produce the best foams. Tests were also carried out on the influence of the amount of LPG present in a can. The analysis of the LPG volume used was based on the best foam obtained in the previous studies, 7396, with an LPG of (30/0/70 w/w%), and changes were made to the LPG volume present in the prepolymer. The study concluded that increasing the volume can decrease foam density, and that decreasing it increases density.
It also indicated that a poor adjustment of the volume may cause poor foam properties. The economic analysis concluded that the cost of foams with more LPG in their formulation falls by about 3% when the LPG volume in the prepolymer is increased by about 8%. This cost reduction is due to the fact that an increase in gas volume implies a decrease in the amount of the remaining, more expensive raw materials, since the total useful volume of the can must always be kept at 750 mL. In order to improve the quality of foam 7396 (30/0/70 w/w%) obtained in the previous trials, HFC-152a (1,1-difluoroethane) was added to the 7396 formulation. The results show that foams with poor properties are formed, especially at −10/−10 °C, although it provided excellent can shaking rates. A brief cost analysis indicates that its use is not advisable given the results obtained, as it does not provide a favourable cost/benefit balance. The three best foams obtained from all the studies were compared with a winter foam available on the market. 7396 and 7638, with a volume of 27% in the prepolymer and LPG compositions of (30/0/70 w/w%) and (13.7/0/86.3 w/w%), respectively, and 7690, with 37% volume in the prepolymer and LPG (30/0/70 w/w%), showed in general better results than the benchmark foam. However, their shaking rates at −10/−10 °C were considerably lower than those of the benchmark composition.
Abstract:
Adequate prediction of the time-dependent behaviour of concrete, namely shrinkage, is essential in the design of large structures, making it possible to conceive, design and adopt the constructive provisions for a structural behaviour that satisfies safety, serviceability and durability requirements. The present moment is marked by a transition in structural design codes, with the imminent replacement of national regulations by European regulations. In the case of concrete structures, the Portuguese code for reinforced and prestressed concrete structures (REBAP), in force since 1983, will be replaced by Eurocode 2. In parallel, the Fédération Internationale du Béton published the Model Code 2010 (MC2010), a document that will certainly have a strong influence on the evolution of concrete structure regulations. In this context, the present work aims to compare the different shrinkage prediction models included in the aforementioned normative documents, identifying the main differences and similarities between them and quantifying the influence of the different factors considered in their formulation, in order to assess the impact that the introduction of these prediction models will have on the design of concrete structures. To assess how these models reflect the reality of the phenomenon under study, the prediction models were applied to the concrete of two structures whose structural behaviour is monitored by LNEC, namely the Miguel Torga bridge over the Douro river, in Régua, and the bridge over the Angueira river, in the district of Bragança. In both structures shrinkage had been characterized in situ, and the experimental values thus obtained were compared with the values resulting from the application of the prediction models considered in this work.
Finally, some conclusions drawn from the work developed in this dissertation are presented, as well as some suggestions for future developments.
Abstract:
In the aftermath of a large-scale disaster, agents' decisions derive from self-interested (e.g. survival), common-good (e.g. victims' rescue) and teamwork (e.g. fire extinction) motivations. However, current decision-theoretic models are either purely individual or purely collective and find it difficult to deal with motivational attitudes; on the other hand, mental-state based models find it difficult to deal with uncertainty. We propose a hybrid approach, CvI-JI, that combines: i) collective 'versus' individual (CvI) decisions, founded on the Markov decision process (MDP) quantitative evaluation of joint actions, and ii) the joint-intentions (JI) formulation of teamwork, founded on the belief-desire-intention (BDI) architecture of general mental-state based reasoning. The CvI-JI evaluation explores the performance improvement
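The MDP side of the CvI evaluation can be sketched with a toy value iteration over joint actions. The two-state rescue scenario and all numbers below are hypothetical, not taken from the CvI-JI model itself:

```python
# Toy sketch: evaluate joint actions with value iteration on a tiny MDP.
# States: 0 = victim trapped, 1 = victim rescued (absorbing).
# Joint actions: 0 = both agents dig together, 1 = agents split tasks.
P = {  # P[a][s][s']: transition probabilities (hypothetical)
    0: [[0.4, 0.6], [0.0, 1.0]],
    1: [[0.7, 0.3], [0.0, 1.0]],
}
R = {0: [-2.0, 0.0], 1: [-1.0, 0.0]}  # per-step cost of each joint action
gamma = 0.95

V = [0.0, 0.0]
for _ in range(200):  # value iteration to a fixed point
    V = [max(R[a][s] + gamma * sum(p * V[s2] for s2, p in enumerate(P[a][s]))
             for a in P)
         for s in range(2)]

# Greedy joint action in each state under the converged values.
best = {s: max(P, key=lambda a, s=s: R[a][s] + gamma *
               sum(p * V[s2] for s2, p in enumerate(P[a][s])))
        for s in range(2)}
print(V, best)
```

In the hybrid approach this quantitative evaluation would be combined with the BDI-side joint-intention reasoning, which the sketch does not attempt to model.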
Abstract:
The main objective of this dissertation is to deepen the knowledge about the incorporation of fibres in bituminous mixtures, in particular in Stone Mastic Asphalt (SMA) mixtures. For this type of bituminous mixture there is a European product standard, EN 13108-5:2006. The initial type tests for CE certification of the mixtures are covered by the Portuguese standard NP EN 13108-20:2008, which also stipulates the test conditions. Several types of fibres that can be used in the manufacture of SMA mixtures are described, with special focus on mixtures manufactured with cellulosic fibres, since this specific type of mixture is frequently applied in surface courses in other countries, with recognized advantages in terms of pavement durability and performance. SMA mixtures are described, and methods for their mix design and characterization are analysed. Subsequently, the experimental work carried out for a specific case is presented and discussed. The tests performed made it possible to characterize the mixture in terms of water sensitivity, stiffness modulus and resistance to permanent deformation. It is concluded that the SMA mixture with fibre incorporation that was analysed shows good permanent-deformation behaviour and good resistance to the action of water, compared with the traditional bituminous mixtures applied in surface courses.
Abstract:
In this paper we present a methodology that enables the graphical representation, in a two-dimensional Euclidean space, of atmospheric pollutant emissions in European countries. This approach relies on Multidimensional Unfolding (MDU), an exploratory multivariate data analysis technique, which illustrates both the relationships between the emitted gases and those between the gases and their geographical origins. The main contribution of this work concerns the evaluation of MDU solutions. We use simulated data to define thresholds for the model fit measures, allowing the quality of the MDU output to be evaluated. The quality assessment of the model adjustment is thus carried out as a step prior to the interpretation of the results for gas types and geographical origins. The analysis of the MDU maps generates useful insights, with immediate substantive results, and enables the formulation of hypotheses for further analysis and modelling.
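Fit measures for MDU solutions are typically stress-type statistics comparing observed dissimilarities with the distances reproduced by the low-dimensional configuration. A generic sketch of Kruskal's Stress-1 (the specific measures and thresholds used in the paper may differ):

```python
import numpy as np

def stress1(D, X):
    """Kruskal's raw Stress-1 between observed dissimilarities D (n x n)
    and the Euclidean distances reproduced by a configuration X (n x 2)."""
    E = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(D), k=1)     # upper triangle, no diagonal
    return float(np.sqrt(((D[iu] - E[iu]) ** 2).sum() / (D[iu] ** 2).sum()))

# A configuration that reproduces D exactly has zero stress; thresholds
# for an "acceptable" stress are what the simulated data calibrate.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(stress1(D, X))  # 0.0
```

Simulating random dissimilarity matrices and computing this statistic on their fitted configurations is one way to obtain the null-reference thresholds described in the abstract.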
Abstract:
This paper proposes a novel framework for modelling the Value for the Customer, the so-called Conceptual Model for Decomposing Value for the Customer (CMDVC). This conceptual model is first validated through an exploratory case study in which the authors validate both the proposed constructs of the model and their relations. In a second step the authors propose a mathematical formulation for the CMDVC as well as a computational method. This enabled the final quantitative discussion of how the CMDVC can be applied and used in the enterprise environment, and the final validation by the people in the enterprise. Throughout this research we were able to confirm that the results of this novel quantitative approach to modelling the Value for the Customer are consistent with the company's empirical experience. The paper further discusses the merits and limitations of this approach, proposing that the model is likely to bring value in supporting not only contract preparation at an Ex-Ante Negotiation Phase, as demonstrated, but also the actual negotiation process, as finally confirmed by an enterprise testimonial.
Abstract:
Value has been defined in different theoretical contexts as need, desire, interest, standard/criteria, beliefs, attitudes, and preferences. The creation of value is key to any business, and any business activity is about exchanging some tangible and/or intangible good or service and having its value accepted and rewarded by customers or clients, either inside the enterprise or collaborative network or outside. “Perhaps surprising then is that firms often do not know how to define value, or how to measure it” (Anderson and Narus, 1998, cited by [1]). Woodruff echoed that we need “richer customer value theory” for providing an “important tool for locking onto the critical things that managers need to know”. In addition, he emphasized that “we need customer value theory that delves deeply into customer’s world of product use in their situations” [2]. In this sense, we proposed and validated a novel “Conceptual Model for Decomposing the Value for the Customer”. To this end, we were aware that time has a direct impact on customer-perceived value, and that suppliers’ and customers’ perceptions change from the pre-purchase to the post-purchase phase, causing some uncertainty and doubt. We wanted to break value down into all its components, as well as all built and used assets (from both endogenous and/or exogenous perspectives). This component analysis was then transposed into a mathematical formulation using the Fuzzy Analytic Hierarchy Process (AHP), so that the uncertainty and vagueness of value perceptions could be embedded in a model that relates used and built assets in the tangible and intangible deliverable exchange among the involved parties, together with their actual value perceptions.
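A minimal sketch of the Fuzzy AHP step: triangular fuzzy pairwise judgements, Buckley-style fuzzy geometric means per row, and a simple centroid defuzzification. The three value components and all judgements are invented, and the actual CMDVC computation may differ in its defuzzification and normalization details:

```python
import numpy as np

# Pairwise comparison matrix of three hypothetical value components,
# as triangular fuzzy numbers (l, m, u); A[i][j] = "i over j".
A = [
    [(1, 1, 1),       (2, 3, 4),       (4, 5, 6)],
    [(1/4, 1/3, 1/2), (1, 1, 1),       (1, 2, 3)],
    [(1/6, 1/5, 1/4), (1/3, 1/2, 1),   (1, 1, 1)],
]
n = len(A)

# Fuzzy geometric mean of each row (Buckley), component-wise on (l, m, u).
g = [tuple(np.prod([A[i][j][k] for j in range(n)]) ** (1 / n)
           for k in range(3))
     for i in range(n)]

# Simplified defuzzification: centroid of each triangle, then normalize.
crisp = np.array([(l + m + u) / 3 for l, m, u in g])
w = crisp / crisp.sum()
print(np.round(w, 3))
```

The spread between l and u carries the vagueness of the perceptions; the final weights order the value components while that vagueness is retained up to the defuzzification step.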
Abstract:
The best places to locate Gas Supply Units (GSUs) in a natural gas system and their optimal allocation to loads are key factors in organizing an efficient upstream gas infrastructure. The number of GSUs and their optimal location in a gas network is a decision problem that can be formulated as a linear programming problem. Our emphasis is on the formulation and use of a suitable location model, reflecting real-world operations and constraints of a natural gas system. This paper presents a heuristic model, based on a Lagrangean approach, developed for finding the optimal GSU locations on a natural gas network, minimizing expenses and maximizing throughput and security of supply. The location model is applied to the Iberian high-pressure natural gas network, a system modelled with 65 demand nodes. These nodes are linked by physical and virtual pipelines (road trucks carrying gas in liquefied form). The location model results show the best places to locate GSUs, with the optimal demand allocation and the most economical gas transport mode: by pipeline or by road truck.
Abstract:
The natural gas industry has been confronted with big challenges: strong growth in demand, investment in new GSUs (gas supply units), and efficient technical system management. The right number of GSUs, their best location in the network and their optimal allocation to loads is a decision problem that can be formulated as a combinatorial programming problem, with the objective of minimizing system expenses. Our emphasis is on the formulation, interpretation and development of a solution algorithm that analyses the trade-off between infrastructure investment expenditure and operating system costs. The location model was applied to a 12-node natural gas network, and its effectiveness was tested in five different operating scenarios.
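For a network as small as 12 nodes, the combinatorial decision problem can even be solved by enumeration, which makes the investment/operating-cost trade-off explicit. A toy sketch with hypothetical costs (6 nodes for brevity, not the paper's network):

```python
from itertools import combinations

# Choose which nodes get a GSU so that investment cost plus the cost of
# supplying every node from its nearest open GSU is minimal.
# All costs are hypothetical.
n = 6
build = [50, 60, 45, 70, 55, 65]          # investment cost per candidate node
supply = [[0, 4, 7, 9, 12, 15],           # supply[i][j]: cost to serve j from i
          [4, 0, 3, 6, 10, 13],
          [7, 3, 0, 4, 8, 11],
          [9, 6, 4, 0, 5, 9],
          [12, 10, 8, 5, 0, 4],
          [15, 13, 11, 9, 4, 0]]

best_cost, best_sites = float("inf"), None
for k in range(1, n + 1):                 # enumerate all non-empty site sets
    for sites in combinations(range(n), k):
        cost = (sum(build[i] for i in sites) +
                sum(min(supply[i][j] for i in sites) for j in range(n)))
        if cost < best_cost:
            best_cost, best_sites = cost, sites
print(best_cost, best_sites)
```

Enumeration grows as 2^n, so it is only a baseline; the Lagrangean approach of the companion papers is what scales to realistic networks.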
Abstract:
In this paper we study the optimal natural gas commitment for a known demand scenario. This study implies the best location of GSUs to supply all demands and the optimal allocation from sources to gas loads, through an appropriate transportation mode, in order to minimize total system costs. Our emphasis is on the formulation and use of a suitable optimization model, reflecting real-world operations and the constraints of natural gas systems. The mathematical model is based on a Lagrangean heuristic, using Lagrangean relaxation, an efficient approach to solving the problem. Computational results are presented for the Iberian and American natural gas systems, geographically organized into 65 and 88 load nodes, respectively. The location model results, supported by the computational application GasView, show the optimal location and allocation solution and the total system costs, and suggest a suitable gas transportation mode, presented in both numerical and graphical form.
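The Lagrangean relaxation idea can be sketched on a stand-in model: an uncapacitated facility location problem in which the assignment constraints (each load served by exactly one GSU) are relaxed with multipliers updated by subgradient steps. All data are hypothetical, and the sketch produces only the lower bound, not the full heuristic solution:

```python
# Lagrangean-relaxation sketch for a small uncapacitated facility
# location problem (a stand-in for the GSU location model).
f = [8.0, 9.0, 7.0]                       # GSU opening costs (hypothetical)
c = [[2, 7, 9, 5],                        # c[i][j]: cost to serve load j from i
     [6, 3, 8, 4],
     [9, 6, 2, 7]]
m, n = len(f), len(c[0])

lam = [5.0] * n                           # multipliers on "serve j exactly once"
best_lb = float("-inf")
for it in range(100):
    # Relaxed problem decomposes by site: open i iff its reduced cost < 0.
    x = [[0] * n for _ in range(m)]
    lb = sum(lam)
    for i in range(m):
        red = f[i] + sum(min(0.0, c[i][j] - lam[j]) for j in range(n))
        if red < 0:
            lb += red
            for j in range(n):
                if c[i][j] - lam[j] < 0:
                    x[i][j] = 1
    best_lb = max(best_lb, lb)            # valid lower bound for any lam
    # Subgradient step on the violated assignment constraints.
    g = [1 - sum(x[i][j] for i in range(m)) for j in range(n)]
    step = 1.0 / (it + 1)
    lam = [lam[j] + step * g[j] for j in range(n)]
print("Lagrangean lower bound:", round(best_lb, 2))
```

In a full heuristic, each relaxed solution would also be repaired into a feasible location/allocation plan, and the gap between that plan and the bound measures solution quality.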