Abstract:
The term innovation is one of the most widely used in business practice. However, to capture value from it, companies need a systematic and structured way to manage innovation. This process can be difficult and risky, since it involves developing the firm's capabilities, with human and technical challenges that depend on the firm's context. Moreover, there appears to be no magic formula for managing innovation: what works in one company may not work in another, even within the same industry. The purpose of this research is therefore to identify how oil and gas companies can manage innovation and to determine the main elements, their interrelations, and the structure required for managing innovation effectively in this sector, which is critical for the world economy. The study follows a holistic single case study of a National Oil Company (NOC) in a developing country to explore how innovation is performed in the industry, which elements are central to innovation management, and how they interact given the nature of the industry. Contributory literature and qualitative data from the case study company (collected through non-standardized interviews) are analyzed. The research confirms the relevance and importance of defining and implementing an innovation framework to ensure the generation of value and to organize and guide a firm's innovation efforts. Based on the theoretical background, the research findings, and the company's innovation environment and conditions, a framework for managing innovation at the case study company is proposed. This study is one of the few, if not the only one, to have reviewed how oil and gas companies manage innovation and its practical implementation in a company from a developing country. Both researchers and practitioners will gain a snapshot of innovation management in the oil and gas industry and of its growing importance in the business world. Several issues are highlighted as directions for future work. Although research on innovation management has grown significantly, many questions remain about managing innovation in different contexts and industries. Studies are mostly performed in large firms and in developed countries, so research in the context of developing countries, especially in the oil and gas industry, remains an almost untouched area. Finally, the research suggests that it is crucial to explore the effect of several innovation-related variables: open innovation in developing economies and in state-owned companies; the impact of mergers and acquisitions on the innovation performance of oil and gas companies; value measurement in the early stages of the innovation process; and the development of innovation capabilities in companies from developing nations.
Abstract:
As the share of renewable electricity production grows, so does the need to balance variations in electricity generation and consumption by storing electricity. Power to Gas (PtG), the conversion of electrical energy into substitute natural gas, offers one way to store electricity. Electricity is used for water electrolysis, and the resulting hydrogen is used in methanation together with carbon dioxide to produce substitute natural gas. Substitute natural gas produced from electricity in this way is called e-SNG. This work examines the investment, operating and maintenance costs of a PtG plant. A calculation model is created and used to compute profitability analyses for four use cases of a PtG plant, together with sensitivity analyses for each use case. Based on the profitability calculations, the business opportunities of a PtG plant in Finland are assessed. The calculated profitability analyses indicate that the business prospects of the PtG plant base cases are poor. The sensitivity analyses show that the investment costs, the plant's operating hours and the additional revenue from oxygen and heat are the most critical success factors for profitability.
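To illustrate the kind of profitability calculation described above, the following minimal Python sketch annualizes an assumed investment and compares it against e-SNG revenue, electricity cost and O&M, with optional side revenue from oxygen and heat. All function names and numbers are illustrative assumptions, not values or code from the thesis.

```python
# Minimal sketch of a PtG profitability calculation of the kind described above.
# All figures are illustrative assumptions, not values from the thesis.

def annuity_factor(rate: float, years: int) -> float:
    """Convert a one-off investment into an equivalent annual cost."""
    return rate / (1.0 - (1.0 + rate) ** -years)

def annual_profit(capex_eur, om_share, elec_price_eur_mwh, gas_price_eur_mwh,
                  el_to_gas_eff, full_load_hours, el_capacity_mw,
                  side_revenue_eur=0.0, rate=0.06, lifetime=20):
    """Annual profit of a PtG plant selling e-SNG (plus optional O2/heat revenue)."""
    elec_in_mwh = el_capacity_mw * full_load_hours          # electricity consumed
    gas_out_mwh = elec_in_mwh * el_to_gas_eff                # e-SNG produced
    revenue = gas_out_mwh * gas_price_eur_mwh + side_revenue_eur
    costs = (capex_eur * annuity_factor(rate, lifetime)      # annualized investment
             + capex_eur * om_share                          # O&M as a share of capex
             + elec_in_mwh * elec_price_eur_mwh)             # electricity cost
    return revenue - costs

# Example "base case" with assumed numbers (result is negative, i.e. unprofitable):
print(annual_profit(capex_eur=15e6, om_share=0.04, elec_price_eur_mwh=40.0,
                    gas_price_eur_mwh=60.0, el_to_gas_eff=0.55,
                    full_load_hours=5000, el_capacity_mw=5.0,
                    side_revenue_eur=0.2e6))
```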
Abstract:
The steel industry produces, besides steel, solid mineral by-products or slags, while it emits large quantities of carbon dioxide (CO2). Slags consist of various silicates and oxides which are formed in chemical reactions between the iron ore and the fluxing agents during high-temperature processing at the steel plant. Currently, these materials are recycled in the ironmaking processes, used as aggregates in construction, or landfilled as waste. The utilization rate of steel slags can be increased by selectively extracting components from the mineral matrix. As an example, aqueous solutions of ammonium salts such as ammonium acetate, chloride and nitrate extract calcium quite selectively even at ambient temperature and pressure. After the residual solids have been separated from the solution, calcium carbonate can be precipitated by feeding a CO2 flow through the solution. Precipitated calcium carbonate (PCC) is used in different applications as a filler material. Its largest consumer is the papermaking industry, which utilizes PCC because it enhances the optical properties of paper at a relatively low cost. Traditionally, PCC is manufactured from limestone, which is first calcined to calcium oxide, then slaked with water to calcium hydroxide and finally carbonated to PCC. This process emits large amounts of CO2, mainly because of the energy-intensive calcination step. This thesis presents research work on the scale-up of the above-mentioned ammonium salt based calcium extraction and carbonation method, named Slag2PCC. Extending the scope of the earlier studies, it is now shown that the parameters which mainly affect the calcium utilization efficiency are the solid-to-liquid ratio of steel slag and ammonium salt solvent solution during extraction, the mean diameter of the slag particles, and the slag composition, especially the fractions of total calcium, silicon, vanadium and iron as well as the fraction of free calcium oxide. Regarding extraction kinetics, slag particle size, solid-to-liquid ratio and the molar concentration of the solvent solution have the largest effect on the reaction rate. Solvent solution concentrations above 1 mol/L NH4Cl cause leaching of other elements besides calcium. Some of these, such as iron and manganese, result in coloring of the solution, which can be disadvantageous for the quality of the PCC product. Based on chemical composition analysis of the produced PCC samples, however, the product quality is largely similar to that of commercial products. Adding to the novelty of the work, other parameters important for assessing PCC quality, such as particle size distribution and crystal morphology, are studied as well. As in the traditional PCC precipitation process, the ratio of calcium and carbonate ions controls the particle shape; a higher [Ca²⁺]/[CO₃²⁻] ratio favors precipitation of the calcite polymorph, while vaterite forms when carbon species are present in excess. The third main polymorph, aragonite, is only formed at elevated temperatures, above 40-50 °C. In general, longer precipitation times cause transformation of vaterite to calcite or aragonite, but also result in particle agglomeration. The chemical equilibrium of ammonium and calcium ions and dissolved ammonia, which controls the solution pH, affects the particle sizes too. An initial pH of 12-13 during carbonation favors non-agglomerated particles with a diameter of 1 μm and smaller, while pH values of 9-10 generate more agglomerates of 10-20 μm.
As a part of the research work, these findings are implemented in demonstration-scale experimental process setups. For the first time, the Slag2PCC technology is tested at a scale of ~70 liters instead of laboratory scale only. Additionally, the design of a setup of several hundred liters is discussed. For these purposes various process units such as inclined settlers and filters for solids separation, pumps and stirrers for material transfer and mixing, as well as gas feeding equipment are dimensioned and developed. With overall emission reduction of the current industrial processes and good product quality as the main targets, and based on the partial life cycle assessment (LCA) performed, it is most beneficial to utilize low-concentration ammonium salt solutions for the Slag2PCC process. In this manner the post-treatment of the products does not require extensive use of washing and drying equipment, which would otherwise increase the CO2 emissions of the process. The low solvent concentration Slag2PCC process has negative net CO2 emissions; thus, it can be seen as a carbon capture and utilization (CCU) method which actually reduces anthropogenic CO2 emissions compared to the alternative of not using the technology. Even if the amount of steel slag is too small for any substantial mitigation of global warming, the process can have both financial and environmental significance for individual steel manufacturers as a means to reduce the amounts of emitted CO2 and landfilled steel slag. Alternatively, it is possible to introduce the carbon dioxide directly into the mixture of steel slag and ammonium salt solution. The process would generate a 60-75% pure calcium carbonate mixture, the remaining 25-40% consisting of the residual steel slag. This calcium-rich material could be re-used in ironmaking as a fluxing agent instead of natural limestone. Even though this process option would require less process equipment than the Slag2PCC process, it still needs further studies regarding the practical usefulness of the products. Nevertheless, compared to several other CO2 emission reduction methods studied around the world, the processes developed and studied within this thesis have the advantage of existing markets for the produced materials, which also gives a financial incentive for applying the technology in practice.
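For reference, the ammonium-salt pH-swing chemistry underlying the Slag2PCC route described above can be summarized by the following overall reactions, written here for NH4Cl as the solvent; this is a sketch consistent with the description in the abstract, not equations quoted from the thesis.

```latex
\begin{align*}
\mathrm{CaO_{(slag)} + 2\,NH_4Cl_{(aq)}} &\longrightarrow \mathrm{CaCl_{2(aq)} + 2\,NH_{3(aq)} + H_2O} \\
\mathrm{CaCl_{2(aq)} + 2\,NH_{3(aq)} + CO_{2(g)} + H_2O} &\longrightarrow \mathrm{CaCO_{3(s)} + 2\,NH_4Cl_{(aq)}}
\end{align*}
```

The carbonation step regenerates the ammonium salt, which is why low solvent losses and low solvent concentrations are attractive for the process economics and emissions.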
Abstract:
The aim of this thesis is to study whether the use of biomethane as a transportation fuel is reasonable from a climate change perspective. In order to identify potentials and challenges for the reduction of greenhouse gas (GHG) emissions, this dissertation focuses on GHG emission comparisons, on feasibility studies and on the effects of various calculation methodologies. The GHG emission calculations are carried out using life cycle assessment (LCA) methodologies. The aim of these LCA studies is to identify the key parameters affecting the GHG emission saving potential of biomethane production and use, and to give recommendations related to methodological choices. The feasibility studies are also carried out from a life cycle perspective by dividing the biomethane production chain among the various operators along the life cycle of biomethane in order to identify economic bottlenecks. Biomethane use in the transportation sector leads to GHG emission reductions compared to fossil transportation fuels in most cases. In addition, electricity and heat production from landfill gas, biogas or biomethane leads to GHG reductions as well. Electricity production for electric vehicles is also a potential route to direct biogas or biomethane energy to the transportation sector. However, various factors along the life cycle of biomethane affect the GHG reduction potentials. Furthermore, the methodological selections have significant effects on the results. From an economic perspective, there are factors related to the different operators along the life cycle of biomethane which do not encourage biomethane use in the transportation sector. To minimize the greenhouse gas emissions from the life cycle of biomethane, waste feedstock should be preferred. In addition, energy consumption, methane leakages, digestate utilization and the current use of the feedstock or biogas are also key factors. To increase the use of biomethane in the transportation sector, political steering is needed to improve the feasibility for the operators. From a methodological perspective, it is important to recognize the aim of the life cycle assessment study. Life cycle assessment studies can be divided into two categories: (1) studies that produce average GHG information on biomethane to evaluate the acceptability of biomethane use compared to fossil transportation fuels, and (2) studies that produce GHG information on biomethane for actual decision-making situations. The latter helps to determine the actual GHG emission changes in cases where the feedstock, biogas or biomethane is already in other use. For example, directing biogas from electricity production to transportation use does not necessarily lead to additional GHG emission reductions. The use of biomethane thus seems to have a lot of potential for reducing greenhouse gas emissions as a transportation fuel. However, various aspects related to the production processes, to the current use of the feedstock or biogas and to the feasibility have to be taken into account.
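A minimal sketch of the kind of GHG-saving comparison underlying such LCA studies is given below; the chain stages and emission factors are illustrative assumptions, and the allocation and substitution choices discussed above are not reproduced.

```python
# Minimal sketch of a GHG-saving comparison for biomethane as a transport fuel.
# Emission factors are illustrative assumptions (g CO2-eq per MJ of fuel).

def ghg_saving(e_biomethane, e_fossil_comparator):
    """Relative GHG emission saving of biomethane vs. a fossil reference fuel."""
    return (e_fossil_comparator - e_biomethane) / e_fossil_comparator

# Life-cycle emissions of the biomethane chain, summed per MJ (assumed values):
e_bio = sum({
    "feedstock_collection": 5.0,
    "digestion_and_upgrading_energy": 15.0,
    "methane_leakage": 8.0,
    "distribution_and_compression": 4.0,
}.values())

# The fossil comparator value below is an assumption for illustration only.
print(f"saving vs. 94 g/MJ fossil comparator: {ghg_saving(e_bio, 94.0):.0%}")
```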
Abstract:
Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. The Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided. A novel method was developed and implemented as a MATLAB code to calculate porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry heat transfer was severely underpredicted by the realisable k-ε turbulence model used.
An additional calculation with a v²-f turbulence model showed significant improvement in the heat transfer results, which is most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before CFD is used to draw conclusions on the heat transfer performance of rib-roughened GFR fuel rod geometries. It is suggested that the viewpoints of numerical modelling be included in the planning of experiments to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of experiments should also be considered and documented in reasonable detail.
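As an illustration of the porosity-mapping idea mentioned above, the following minimal Python sketch estimates the porosity of a mesh cell laid over a DEM sphere packing by Monte Carlo point sampling; the thesis implements this in MATLAB and its actual algorithm may differ.

```python
# Minimal sketch: estimate the porosity of a cell of a grid laid over a sphere
# packing from a DEM simulation, by Monte Carlo point sampling.
# Illustrative only; the thesis' actual MATLAB algorithm may differ.
import numpy as np

def cell_porosity(cell_min, cell_max, centers, radius, n_samples=2000, rng=None):
    """Fraction of the cell volume NOT occupied by any sphere."""
    if rng is None:
        rng = np.random.default_rng(0)
    pts = rng.uniform(cell_min, cell_max, size=(n_samples, 3))
    # squared distance of each sample point to every sphere centre
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    inside_any = (d2 <= radius ** 2).any(axis=1)
    return 1.0 - inside_any.mean()

# Toy packing: two pebbles of radius 0.03 m inside a 0.1 m cubic cell
centers = np.array([[0.03, 0.05, 0.05], [0.07, 0.05, 0.05]])
print(cell_porosity(np.zeros(3), np.full(3, 0.1), centers, radius=0.03))
```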
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is interesting especially for the Nordic countries, where raw material for the processes is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form compounds of higher molecular weight. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements combined with an industrially relevant temperature interval. The aspects covered in the modeling include screening of possible numerical approaches, testing of optimization methods and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference model to which the two other models were compared. A compact model which included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in the thermal reactions.
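The following minimal sketch illustrates the kind of kinetic parameter estimation compared above, fitting a single first-order rate constant to concentration data with both a Levenberg-Marquardt style least-squares solver and an evolutionary (differential evolution) algorithm; the one-reaction model and synthetic data are illustrative, not the thesis models.

```python
# Minimal sketch of kinetic parameter estimation comparing a local optimizer
# (Levenberg-Marquardt least squares) with an evolutionary algorithm.
# The single-reaction model and synthetic data are illustrative only.
import numpy as np
from scipy.optimize import least_squares, differential_evolution

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])              # residence time, s
c_meas = np.array([1.00, 0.78, 0.61, 0.37, 0.14, 0.02])   # hydrocarbon conc., mol/L

def model(k, t):
    """First-order thermal decomposition: c(t) = c0 * exp(-k t)."""
    return c_meas[0] * np.exp(-k * t)

def residuals(k):
    return model(k[0], t) - c_meas

# Local gradient-based fit (Levenberg-Marquardt, unbounded problem)
k_lm = least_squares(residuals, x0=[1.0], method="lm").x[0]

# Global evolutionary fit of the same sum-of-squares objective
k_de = differential_evolution(lambda k: np.sum(residuals(k) ** 2),
                              bounds=[(1e-3, 10.0)]).x[0]

print(f"k (least squares) = {k_lm:.3f} 1/s, k (diff. evolution) = {k_de:.3f} 1/s")
```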
Abstract:
In the present study we evaluated the precision of the ELISA method for quantifying caffeine in human plasma and compared the results with those obtained by gas chromatography. A total of 58 samples were analyzed by gas chromatography using a nitrogen-phosphorus detector and routine techniques. For the ELISA test, the samples were diluted to obtain a concentration corresponding to 50% of the absorbance of the standard curve. To determine whether the proximity between the I50 of the standard curve and that of the sample would yield a more precise result, the samples were divided into three blocks according to the absolute difference between the I50 of the standard curve and the I50 of the sample. The samples were classified into three groups. The first was composed of 20 samples with I50 differences up to 1.5 ng/ml, the second consisted of 21 samples with I50 differences ranging from 1.51 to 3 ng/ml, and the third of 17 samples with I50 differences ranging from 3.01 to 13 ng/ml. The coefficient of determination (R² = 0.999) showed that the data obtained by gas chromatography represented a reliable basis. The results obtained by ELISA were also reliable, with an estimated Pearson correlation coefficient of 0.82 between the two methods. This coefficient for the different groups (0.88, 0.79 and 0.49 for groups 1, 2 and 3, respectively) showed greater reliability for the test with dilutions closer to the I50.
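A minimal sketch of the group-wise agreement analysis described above is shown below: the Pearson correlation between the two methods is computed overall and within each I50-difference group; the arrays are placeholder data, not the study's measurements.

```python
# Minimal sketch of the method comparison above: Pearson correlation between
# gas-chromatography and ELISA caffeine values, overall and per I50 group.
# The arrays below are placeholder data, not the study's measurements.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
gc = rng.uniform(0.5, 10.0, size=58)                 # GC caffeine, ng/ml
elisa = gc + rng.normal(0.0, 1.0, size=58)           # ELISA with added noise
group = rng.integers(1, 4, size=58)                  # I50-difference group 1-3

r_all, _ = pearsonr(gc, elisa)
print(f"overall r = {r_all:.2f}")
for g in (1, 2, 3):
    mask = group == g
    r_g, _ = pearsonr(gc[mask], elisa[mask])
    print(f"group {g}: n = {mask.sum()}, r = {r_g:.2f}")
```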
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
The cement industry is significantly associated with high greenhouse gas (GHG) emissions. Considering the environmental impact, particularly the global warming potential, it is important to reduce these emissions to air. The aim of the study is to investigate the possibilities for mitigating GHG emissions in the Ethiopian cement industry. The life cycle assessment (LCA) method was used to identify and quantify GHG emissions during the production of one ton of ordinary Portland cement (OPC). Three mitigation scenarios, alternative fuel use, clinker substitution and thermal energy efficiency, were applied to a representative gate-to-gate flow model developed with GaBi 6 software. The results of the study indicate that clinker substitution and alternative fuel use play a great role in GHG emission mitigation at affordable cost. Applying the most energy-efficient kiln technology, which in turn reduces the amount of thermal energy used, has the lowest GHG emission reduction intensity and a high implementation cost compared to the other scenarios. It was found that the cumulative GHG emission mitigation potential of the selected mitigation scenarios can be at least 48.9% per ton of cement produced.
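As an illustration of how individual mitigation scenarios combine into a cumulative reduction per ton of cement, the following sketch applies successive fractional cuts to an assumed baseline; the per-scenario percentages and the baseline are illustrative, not results of the study.

```python
# Minimal sketch of combining mitigation scenarios into a cumulative GHG
# reduction per ton of cement. The per-scenario percentages and baseline are
# illustrative assumptions, not the study's results (which report >= 48.9%).
baseline_kg_co2e_per_t = 900.0          # assumed gate-to-gate emissions of OPC

scenario_reduction = {                   # fractional cut applied to what remains
    "clinker_substitution": 0.30,
    "alternative_fuel_use": 0.20,
    "thermal_energy_efficiency": 0.08,
}

remaining = baseline_kg_co2e_per_t
for name, cut in scenario_reduction.items():
    remaining *= (1.0 - cut)
    print(f"after {name}: {remaining:.0f} kg CO2-eq/t")

print(f"cumulative reduction: {1.0 - remaining / baseline_kg_co2e_per_t:.1%}")
```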
Abstract:
The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) will have a Long Shutdown sometime during 2017 or 2018. During this time there will be maintenance and an opportunity to install new detectors. After the shutdown the LHC will have a higher luminosity. A promising new type of detector for this high-luminosity phase is the Triple-GEM detector. During the shutdown these detectors will be installed at the Compact Muon Solenoid (CMS) experiment. The Triple-GEM detectors are now being developed at CERN, along with a readout ASIC chip for the detector. In this thesis a simulation model was developed for the ASIC's analog front end. The model will help to carry out more extensive simulations and to simulate the whole chip before the design is finished. The proper functioning of the model was tested with simulations, which are also presented in the thesis.
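Since the abstract does not describe the front-end architecture, the following minimal sketch only illustrates how such an analog front end is commonly modelled at the behavioural level, as a short charge pulse filtered by a CR-RC² shaper; the topology, time constant and tool choices are assumptions, not the ASIC model developed in the thesis.

```python
# Minimal behavioural sketch of a detector front end: a short input current
# pulse shaped by a CR-RC^2 filter. The real ASIC front-end model in the
# thesis is not described in the abstract; topology and values are assumed.
import numpy as np
from scipy import signal

tau = 25e-9                               # assumed shaping time constant, s
# CR differentiator followed by two RC integrators: H(s) = s*tau / (1 + s*tau)^3
num = [tau, 0.0]
den = np.polymul([tau, 1.0], np.polymul([tau, 1.0], [tau, 1.0]))
frontend = signal.TransferFunction(num, den)

t = np.linspace(0.0, 500e-9, 2001)
i_in = np.zeros_like(t)
i_in[t < 5e-9] = 1.0                      # short input pulse (delta-like charge)

t_out, v_out, _ = signal.lsim(frontend, i_in, t)
print(f"peaking time ≈ {t_out[np.argmax(v_out)] * 1e9:.0f} ns")
```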
Abstract:
Human activity recognition in everyday environments is a critical but challenging task in Ambient Intelligence applications to achieve proper Ambient Assisted Living, and key challenges still remain to be dealt with to realize robust methods. One of the major limitations of Ambient Intelligence systems today is the lack of semantic models of the activities in the environment, so that the system can recognize the specific activity being performed by the user(s) and act accordingly. In this context, this thesis addresses the general problem of knowledge representation in Smart Spaces. The main objective is to develop knowledge-based models, equipped with semantics, to learn, infer and monitor human behaviours in Smart Spaces. Moreover, it is easy to recognize that some aspects of this problem have a high degree of uncertainty, and therefore the developed models must be equipped with mechanisms to manage this type of information. A fuzzy ontology and a semantic hybrid system are presented to allow modelling and recognition of a set of complex real-life scenarios where vagueness and uncertainty are inherent to the human nature of the users that perform them. The handling of uncertain, incomplete and vague data (i.e., missing sensor readings and activity execution variations, since human behaviour is non-deterministic) is approached for the first time through a fuzzy ontology validated in real-time settings within a hybrid data-driven and knowledge-based architecture. The semantics of activities, sub-activities and real-time object interaction are taken into consideration. The proposed framework consists of two main modules: the low-level sub-activity recognizer and the high-level activity recognizer. The first module detects sub-activities (i.e., actions or basic activities) and takes its input data directly from a depth sensor (Kinect). The main contribution of this thesis tackles the second component of the hybrid system, which lies on top of the previous one at a higher level of abstraction, acquires its input data from the first module's output, and executes ontological inference to provide users, activities and their influence on the environment with semantics. This component is thus knowledge-based, and a fuzzy ontology was designed to model the high-level activities. Since activity recognition requires context-awareness and the ability to discriminate among activities in different environments, the semantic framework allows for modelling common-sense knowledge in the form of a rule-based system that supports expressions close to natural language in the form of fuzzy linguistic labels. The advantages of the framework have been evaluated with a challenging new public dataset, CAD-120, achieving an accuracy of 90.1% and 91.1% for low- and high-level activities, respectively. This entails an improvement over both entirely data-driven approaches and merely ontology-based approaches. As an added value, for the system to be sufficiently simple and flexible to be managed by non-expert users, and thus to facilitate the transfer of research to industry, a development framework composed of a programming toolbox, a hybrid crisp and fuzzy architecture, and graphical models to represent and configure human behaviour in Smart Spaces was developed in order to provide the framework with more usability in the final application.
As a result, human behaviour recognition can help assist people with special needs, for example in healthcare, independent elderly living, remote rehabilitation monitoring, industrial process guideline control, and many other cases. This thesis shows use cases in these areas.
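As a small illustration of the fuzzy linguistic labels and rules mentioned above, the following sketch evaluates one invented rule with a trapezoidal membership function; the label, the rule and the t-norm choice are illustrative only and far simpler than the fuzzy ontology developed in the thesis.

```python
# Minimal sketch of a fuzzy linguistic label and rule of the kind used in the
# rule-based layer described above. The label, rule and t-norm are invented
# for illustration; the thesis' fuzzy ontology and reasoner are far richer.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership degree of x in a fuzzy set (a <= b <= c <= d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def long_time(seconds):
    """Linguistic label 'long time spent near the kettle'."""
    return trapezoid(seconds, 10, 20, 300, 600)

def rule_preparing_hot_drink(near_kettle_s, holding_cup):
    """IF time near kettle is long AND a cup is held THEN 'preparing hot drink';
    min() is used as the t-norm for the fuzzy AND."""
    return min(long_time(near_kettle_s), 1.0 if holding_cup else 0.0)

print(rule_preparing_hot_drink(near_kettle_s=14, holding_cup=True))  # 0.4
print(rule_preparing_hot_drink(near_kettle_s=45, holding_cup=True))  # 1.0
```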
Abstract:
The aim of the present study was to examine the feasibility of DNA microarray technology in an attempt to construct an evaluation system for determining gas toxicity under high-pressure conditions, as it is well known that pressure increases the concentration of a gas. As a first step, we used yeast (Saccharomyces cerevisiae) as the indicator organism and analyzed the mRNA expression profiles after exposure of yeast cells to nitrogen gas. Nitrogen gas was selected as a negative control since this gas has low toxicity. Yeast DNA microarray analysis revealed induction of genes whose products were localized to the membranes, and of genes that are involved in or contribute to energy production. Furthermore, we found that nitrogen gas significantly affected the transport system in the cells. Interestingly, nitrogen gas also resulted in induction of cold-shock responsive genes. These results suggest the possibility of applying yeast DNA microarrays to gas bioassays at pressures up to 40 MPa. We therefore think that "bioassays" are ideal for use in environmental control and protection studies.
Abstract:
Finland, the other Nordic countries and the European Union aim to decarbonize their energy production by 2050. Decarbonization requires large-scale implementation of emission-free energy sources, i.e. renewable energy and nuclear power. Stochastic renewable energy sources make it challenging to balance energy supply and demand. Energy storage and emission-free fuels are required in mobility and industrial processes whenever electrification is not possible. The Neo-Carbon project studies the decarbonization of energy production and the role of synthetic gas in it. This thesis studies industrial processes in steel production, oil refining, cement manufacturing and glass manufacturing where natural gas is already used or a fuel switch to synthetic natural gas (SNG) is possible. The technical potential for fuel switching is assessed; an assessment of the economic potential is a necessary next step. All the studied processes have potential for fuel switching, but total decarbonization of steel production and oil refining requires the implementation of other zero-emission technologies.
Abstract:
The aim of the present study was to determine the ventilation/perfusion ratio that contributes to hypoxemia in pulmonary embolism by analyzing blood gases and volumetric capnography in a model of experimental acute pulmonary embolism. Pulmonary embolization with autologous blood clots was induced in seven pigs weighing 24.00 ± 0.6 kg, anesthetized and mechanically ventilated. Significant changes occurred from baseline to 20 min after embolization, such as reductions in oxygen partial pressures in arterial blood (from 87.71 ± 8.64 to 39.14 ± 6.77 mmHg) and alveolar air (from 92.97 ± 2.14 to 63.91 ± 8.27 mmHg). The effective alveolar ventilation exhibited a significant reduction (from 199.62 ± 42.01 to 84.34 ± 44.13) consistent with the fall in the alveolar gas volume that effectively participated in gas exchange. The relation between the alveolar ventilation that effectively participated in gas exchange and cardiac output (VAeff/Q ratio) also presented a significant reduction after embolization (from 0.96 ± 0.34 to 0.33 ± 0.17 fraction). The carbon dioxide partial pressure increased significantly in arterial blood (from 37.51 ± 1.71 to 60.76 ± 6.62 mmHg), but decreased significantly in exhaled air at the end of the respiratory cycle (from 35.57 ± 1.22 to 23.15 ± 8.24 mmHg). Exhaled air at the end of the respiratory cycle returned to baseline values 40 min after embolism. The arterial to alveolar carbon dioxide gradient increased significantly (from 1.94 ± 1.36 to 37.61 ± 12.79 mmHg), as did the calculated alveolar (from 56.38 ± 22.47 to 178.09 ± 37.46 mL) and physiological (from 0.37 ± 0.05 to 0.75 ± 0.10 fraction) dead spaces. Based on our data, we conclude that the severe arterial hypoxemia observed in this experimental model may be attributed to the reduction of the VAeff/Q ratio. We were also able to demonstrate that VAeff/Q progressively improves after embolization, a fact attributed to the alveolar ventilation redistribution induced by hypocapnic bronchoconstriction.
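For context, quantities of the kind reported above are commonly derived from blood gas and capnography values with the standard alveolar gas and Bohr-Enghoff dead-space equations; the sketch below shows these formulas with illustrative inputs, not the study's raw data.

```python
# Minimal sketch of the standard gas-exchange calculations behind quantities
# reported above (alveolar PO2, Bohr-Enghoff dead space). Input values are
# illustrative, not the raw data of the study.

def alveolar_po2(fio2=0.21, pb_mmhg=760.0, ph2o_mmhg=47.0, paco2_mmhg=40.0, rq=0.8):
    """Alveolar gas equation: PAO2 = FiO2*(Pb - PH2O) - PaCO2/RQ."""
    return fio2 * (pb_mmhg - ph2o_mmhg) - paco2_mmhg / rq

def bohr_enghoff_dead_space(paco2_mmhg, pe_mean_co2_mmhg):
    """Physiological dead-space fraction VD/VT = (PaCO2 - PE_CO2) / PaCO2,
    with PE_CO2 the mixed-expired CO2 partial pressure from capnography."""
    return (paco2_mmhg - pe_mean_co2_mmhg) / paco2_mmhg

print(f"PAO2 example  ≈ {alveolar_po2():.0f} mmHg")
print(f"VD/VT example ≈ {bohr_enghoff_dead_space(40.0, 26.0):.2f}")
```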
Abstract:
Experimental models of sepsis-induced pulmonary alterations are important for the study of pathogenesis and for potential intervention therapies. The objective of the present study was to characterize lung dysfunction (low PaO2 and high PaCO2, and increased cellular infiltration, protein extravasation, and malondialdehyde (MDA) production assessed in bronchoalveolar lavage) in a sepsis model consisting of intraperitoneal (ip) injection of Escherichia coli and the protective effects of pentoxifylline (PTX). Male Wistar rats (weighing between 270 and 350 g) were injected ip with 10⁷ or 10⁹ CFU/100 g body weight or saline and samples were collected 2, 6, 12, and 24 h later (N = 5 each group). PaO2, PaCO2 and pH were measured in blood, and cellular influx, protein extravasation and MDA concentration were measured in bronchoalveolar lavage. In a second set of experiments either PTX or saline was administered 1 h prior to E. coli ip injection (N = 5 each group) and the animals were observed for 6 h. Injection of 10⁷ or 10⁹ CFU/100 g body weight of E. coli induced acidosis, hypoxemia, and hypercapnia. An increased (P < 0.05) cell influx was observed in bronchoalveolar lavage, with a predominance of neutrophils. Total protein and MDA concentrations were also higher (P < 0.05) in the septic groups compared to control. A higher tumor necrosis factor-alpha (P < 0.05) concentration was also found in these animals. Changes in all parameters were more pronounced with the higher bacterial inoculum. PTX administered prior to sepsis reduced (P < 0.05) most functional alterations. These data show that an E. coli ip inoculum is a good model for the induction of lung dysfunction in sepsis, and suitable for studies of therapeutic interventions.