907 results for Energy-efficient
Abstract:
Tropical countries, such as Brazil and Colombia, have the possibility of using agricultural lands for growing biomass to produce bio-fuels such as biodiesel and ethanol. This study applies an energy analysis to the production process of anhydrous ethanol obtained from the hydrolysis of starch and cellulosic and hemicellulosic material present in the banana fruit and its residual biomass. Four different production routes were analyzed: acid hydrolysis of amylaceous material (banana pulp and banana fruit) and enzymatic hydrolysis of lignocellulosic material (flower stalk and banana skin). The analysis considered banana plant cultivation, feedstock transport, hydrolysis, fermentation, distillation, dehydration, residue treatment and utility plant. The best indexes were obtained for amylaceous material, for which mass performance varied from 346.5 L/t to 388.7 L/t, Net Energy Value (NEV) ranged from 9.86 MJ/L to 9.94 MJ/L and the energy ratio was 1.9 MJ/MJ. For lignocellulosic materials, the figures were less favorable: mass performance varied from 86.1 to 123.5 L/t, NEV from 5.24 to 8.79 MJ/L and energy ratio from 1.3 to 1.6 MJ/MJ. The analysis showed, however, that both processes can be considered energetically feasible. (C) 2010 Elsevier Ltd. All rights reserved.
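As a rough illustration of the two indicators reported in this abstract, the Net Energy Value (energy delivered minus energy invested per litre) and the energy ratio (MJ out per MJ in) can be sketched as below. The heating value and process-input figures are assumptions for illustration, not data from the study.

```python
# Hedged sketch of the NEV and energy-ratio indicators; all input
# figures are assumed, not taken from the study.

def nev(energy_out_mj_per_l, energy_in_mj_per_l):
    """Net Energy Value: energy delivered minus energy invested, MJ/L."""
    return energy_out_mj_per_l - energy_in_mj_per_l

def energy_ratio(energy_out_mj_per_l, energy_in_mj_per_l):
    """Energy delivered per unit of energy invested, MJ/MJ."""
    return energy_out_mj_per_l / energy_in_mj_per_l

out_mj = 21.2   # assumed lower heating value of anhydrous ethanol, MJ/L
in_mj = 11.2    # assumed total process energy input, MJ/L
print(round(nev(out_mj, in_mj), 2), round(energy_ratio(out_mj, in_mj), 2))
```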
Abstract:
An alternative for ethanol production is the use of vegetable waste, such as the surplus of banana production, estimated at 2,400,000 t/year, which includes residual banana fruit and lignocellulosic material. This paper analyzes the energetic and exergetic behavior involved in scaling the process, developed at laboratory scale, up to a banana-processing plant for ethanol production, covering: growing and transport of the vegetable material, hydrolysis of the banana fruit, sugar fermentation, ethanol distillation and the utility plant. Finally, energy and exergy indicators are obtained. The results show a positive energy balance when banana fruit is used for ethanol production, but some process modifications must be made to improve the exergetic efficiency of ethanol production.
Abstract:
Wear behavior of coatings has usually been described in terms of mechanical properties such as hardness (H) and effective elastic modulus (E*). Alternatively, an energy approach appears as a promising analysis taking into account the influence of those properties. In a nanoindentation test, the dissipated energy depends not only on the hardness and elastic modulus, but also on the elastic recovery (W(e)). This work aims to establish a relation between the plastic deformation energy (E(p)) during depth-sensing indentation and the grooving resistance of coatings in nanoscratch tests. An energy dissipation coefficient (K(d)) was defined, calculated as the ratio of the plastic to the total deformation energy (E(p)/E(t)), which represents the energy dissipation of materials. Coatings were obtained by reactive triode magnetron sputtering, using titanium as the target and nitrogen and methane as reactive gases, in order to assess wear and nanoindentation data. A topographical, chemical and microstructural characterization was conducted using X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), wave dispersion spectroscopy (WDS), scanning electron (SEM) and atomic force microscopy (AFM) techniques. Nanoscratch results showed that the groove depth was well correlated to the energy dissipation coefficient of the coatings. On the other hand, a reduction in the coefficient was found when the elastic recovery was increased. (C) 2009 Elsevier B.V. All rights reserved.
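The coefficient K(d) = E(p)/E(t) described above can be sketched numerically: the total energy E(t) is the area under the loading curve, the area under the unloading curve is the recovered elastic part, and their difference is the dissipated plastic energy E(p). The load-depth data below are synthetic assumptions, not measurements from the study.

```python
import numpy as np

# Sketch of the energy-dissipation coefficient K_d = E_p / E_t from a
# synthetic depth-sensing indentation curve (assumed shapes, not real data).

def _trapezoid(y, x):
    # area under y(x) by the trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dissipation_coefficient(h_load, p_load, h_unload, p_unload):
    e_total = _trapezoid(p_load, h_load)        # work done during loading, E_t
    e_elastic = _trapezoid(p_unload, h_unload)  # work recovered on unloading
    e_plastic = e_total - e_elastic             # dissipated part, E_p
    return e_plastic / e_total

h = np.linspace(0.0, 100.0, 201)             # indentation depth, nm (assumed)
p_load = 1e-3 * h ** 2                       # parabolic loading curve, mN
h_un = np.linspace(60.0, 100.0, 81)          # unloading back to residual depth
p_un = 10.0 * ((h_un - 60.0) / 40.0) ** 2    # steeper elastic unloading branch
kd = dissipation_coefficient(h, p_load, h_un, p_un)
print(round(kd, 3))
```

With these assumed curves about 60% of the indentation work is dissipated plastically.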
Abstract:
There are several ways to attempt to model a building and its heat gains from external sources as well as internal ones in order to evaluate a proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models that are based on physical principles, can be used for simulation. A frequent hypothesis for all these models is that the input variables should be based on realistic data when they are available, otherwise the evaluation of energy consumption might be significantly under- or overestimated. In this paper, a comparison is made between a simple model based on an artificial neural network (ANN) and a model that is based on physical principles (EnergyPlus) as auditing and predicting tools in order to forecast building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles are collected as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus in order to evaluate the influence of several parameters, such as the building occupation profile and weather data, on such forecasting. (C) 2008 Elsevier B.V. All rights reserved.
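A minimal sketch of the ANN side of such a comparison is shown below: a single-hidden-layer network fitted by gradient descent to a synthetic "consumption" signal driven by temperature and occupancy. The data and network size are assumptions, purely to illustrate the fitting procedure, not the paper's model.

```python
import numpy as np

# Minimal one-hidden-layer network for consumption regression.
# All data are synthetic assumptions (scaled temperature and occupancy).

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 2))       # [outdoor temp, occupancy]
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5        # toy consumption signal

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

losses, lr = [], 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                   # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    losses.append(float(np.mean(err ** 2)))    # mean squared error
    g2 = h.T @ err[:, None] / len(X)           # backpropagated gradients
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)
    g1 = X.T @ gh / len(X)
    W2 -= lr * g2; b2 -= lr * err.mean()
    W1 -= lr * g1; b1 -= lr * gh.mean(axis=0)

print(round(losses[0], 3), round(losses[-1], 3))   # loss should drop markedly
```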
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project involves the development of a topology optimization formulation for the dynamic design of piezoelectric laminated plates aimed at piezoelectric sensors, actuators and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior with specified resonance frequencies, modes, and enhanced electromechanical coupling factor (EMCC). The finite element model employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material at each finite element and its polarization sign. The design problem aims at simultaneously shaping an eigenmode, i.e., maximizing and minimizing vibration amplitudes at certain points of the structure, tuning the corresponding eigenvalue to a desired value, and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues, enhancing their corresponding eigenshapes while maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
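The RAMP interpolation mentioned above maps a pseudo-density in [0, 1] to a material-property scaling without the zero-slope-at-zero issue of plain power laws, which helps gradient-based optimizers move away from void elements. A minimal sketch follows; the penalization value q is an assumption.

```python
# Sketch of the RAMP material interpolation used in topology optimization:
# the pseudo-density rho scales the element property; intermediate densities
# are penalized. The penalization parameter q = 8 is an assumed value.

def ramp(rho, q=8.0):
    return rho / (1.0 + q * (1.0 - rho))

assert ramp(0.0) == 0.0      # void element: no material contribution
assert ramp(1.0) == 1.0      # solid element: full property
assert ramp(0.5) < 0.5       # intermediate densities penalized toward 0/1
```

Unlike a power-law (SIMP) interpolation, RAMP keeps a nonzero slope at rho = 0, here 1/(1+q).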
Abstract:
Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
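For reference, the preconditioned conjugate gradient (CG) baseline that the recycling solver is compared against can be sketched in a few lines. The 4x4 system below is a toy stand-in for the FEM forward problem, and the Jacobi preconditioner is an assumed simple choice, not the paper's "powerful preconditioner".

```python
import numpy as np

# Minimal Jacobi-preconditioned conjugate gradient for SPD systems,
# illustrating the baseline solver; the system is a toy example.

def pcg(A, b, tol=1e-10, max_iter=200):
    m_inv = 1.0 / np.diag(A)          # Jacobi (diagonal) preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = m_inv * r
    p = z.copy()
    for _ in range(max_iter):
        rz = r @ z
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        p = z + (r @ z / rz) * p      # update search direction
    return x

A = np.array([[4.0, 1, 0, 0], [1, 4, 1, 0], [0, 1, 4, 1], [0, 0, 1, 4]])
b = np.array([1.0, 2, 3, 4])
x = pcg(A, b)
print(np.allclose(A @ x, b))
```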
Abstract:
Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, which is here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law where the coefficients follow Steinmetz's law.
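A worked form of the combined law described above: loss grows linearly with the inverse grain size (Mager's law) and as a power of the maximum induction (Steinmetz's law). All coefficient values below are assumptions for illustration, not the paper's fitted parameters.

```python
# Combined loss law, W_h = (a + b / d) * B**alpha, in arbitrary units.
# a, b and alpha are assumed illustrative coefficients, not fitted values.

def hysteresis_loss(grain_size_um, b_max_t, a=10.0, b=400.0, alpha=1.7):
    return (a + b / grain_size_um) * b_max_t ** alpha

# Loss rises as grains shrink at fixed induction ...
assert hysteresis_loss(10.0, 1.0) > hysteresis_loss(20.0, 1.0)
# ... and rises with maximum induction at fixed grain size.
assert hysteresis_loss(20.0, 1.5) > hysteresis_loss(20.0, 1.0)
```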
Abstract:
Nanomaterials have triggered excitement in both fundamental science and technological applications in several fields. However, the same characteristic high interface area that is responsible for their unique properties causes unconventional instability, often leading to local collapsing during application. Thermodynamically, this can be attributed to an increased contribution of the interface to the free energy, activating phenomena such as sintering and grain growth. The lack of reliable interface energy data has restricted the development of conceptual models to allow the control of nanoparticle stability on a thermodynamic basis. Here we introduce a novel and accessible methodology to measure the interface energy of nanoparticles, exploiting the heat released during sintering to establish a quantitative relation between the solid-solid and solid-vapor interface energies. We applied this method to MgO and ZnO nanoparticles and determined that the ratio between the solid-solid and solid-vapor interface energies is 11 for MgO and 0.7 for ZnO. We then discuss how this ratio is responsible for a thermodynamically metastable state that may prevent collapsing of nanoparticles and, therefore, may be used as a tool to design long-term stable nanoparticles.
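The calorimetric idea can be sketched with a simple energy balance: during sintering, solid-vapor interface area is destroyed and solid-solid (grain-boundary) area is created, and the measured heat reflects the difference. The balance below and every number in it are assumptions for illustration only, not the paper's exact formalism or data.

```python
# Assumed energy balance for the sintering-calorimetry sketch:
#   q_released = gamma_sv * dA_sv - gamma_ss * dA_ss
# Solving for gamma_ss gives the solid-solid / solid-vapor energy ratio.
# All inputs are made-up illustrative values.

def ss_over_sv_ratio(q_released, d_area_sv, d_area_ss, gamma_sv):
    gamma_ss = (gamma_sv * d_area_sv - q_released) / d_area_ss
    return gamma_ss / gamma_sv

r = ss_over_sv_ratio(q_released=500.0,   # heat released, J (assumed)
                     d_area_sv=1000.0,   # solid-vapor area destroyed, m^2
                     d_area_ss=500.0,    # grain-boundary area created, m^2
                     gamma_sv=1.0)       # solid-vapor energy, J/m^2
print(round(r, 2))
```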
The importance of the industrialization of Brazilian shale when faced with the world energy scenario
Abstract:
This article discusses the importance of the industrialization of Brazilian shale based on factors such as: the security of the national energy system, global oil geopolitics, available resources, production costs, oil prices, environmental impacts and the national oil reserves. The study shows that the industrialization of shale always arises when issues such as peak oil or its geopolitics appear as factors that raise the price of oil to unrealistic levels. The article concludes that in the Brazilian case, shale oil may be classified as a strategic, economically viable resource, currently in development owing to the success of the retorting technology for the extraction of shale oil and to the price of crude oil. It also concludes that shale may be the driving factor for the formation of a technology park in Sao Mateus do Sul, due to the city's economic dependence on Petrosix.
Abstract:
A solar energy powered falling film evaporator with film promoter was developed for concentrating diluted solutions (industrial effluents). The procedure proposed here does not emit CO(2), making it a viable alternative to the method of concentrating solutions that uses vapor as a heat source and releases CO(2) from burning fuel oil in a furnace, in direct opposition to the carbon reduction agreement established by the Kyoto protocol. This novel device consists of the following components: a flat plate solar collector with adjustable inclination, a film promoter (adhering to the collector), a liquid distributor, a concentrate collector, and accessories. The evaporation rate of the device was found to be affected both by the inclination of the collector and by the feed flow. The meteorological variables cannot be controlled, but were monitored constantly to ascertain the behavior of the equipment in response to the variations occurring throughout the day. Higher efficiencies were attained when the inclination of the collector was adjusted monthly, showing up to 36.4% higher values than when the collector remained in a fixed position. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
The solar driven photo-Fenton process for treating water containing phenol as a contaminant has been evaluated by means of pilot-scale experiments with a parabolic trough solar reactor (PTR). The effects of Fe(II) (0.04-1.0 mmol L(-1)), H(2)O(2) (7-270 mmol L(-1)), initial phenol concentration (100 and 500 mg C L(-1)), solar radiation, and operation mode (batch and fed-batch) on the process efficiency were investigated. More than 90% of the dissolved organic carbon (DOC) was removed within 3 hours of irradiation or less, a performance equivalent to that of artificially-irradiated reactors, indicating that solar light can be used either as an effective complementary or as an alternative source of photons for the photo-Fenton degradation process. A non-linear multivariable model based on a neural network was fit to the experimental results of batch-mode experiments in order to evaluate the relative importance of the process variables considered on the DOC removal over the reaction time. This included solar radiation, which is not a controlled variable. The observed behavior of the system in batch-mode was compared with fed-batch experiments carried out under similar conditions. The main contribution of the study consists of the results from experiments under different conditions and the discussion of the system behavior. Both constitute important information for the design and scale-up of solar radiation-based photodegradation processes.
Abstract:
Pitzer's equation for the excess Gibbs energy of aqueous solutions of low-molecular-weight electrolytes is extended to aqueous solutions of polyelectrolytes. The model retains the original form of Pitzer's model (combining a long-range term, based on the Debye-Hückel equation, with a short-range term similar to the virial equation, where the second osmotic virial coefficient depends on the ionic strength). The extension consists of two parts: first, it is assumed that a constant fraction of the monomer units of the polyelectrolyte is dissociated, i.e., that fraction does not depend on the concentration of the polyelectrolyte; and second, a modified expression for the ionic strength (wherein each charged monomer group is taken into account individually) is introduced. This modification accounts for the presence of charged polyelectrolyte chains, which cannot be regarded as point charges. The resulting equation was used to correlate osmotic coefficient data of aqueous solutions of a single polyelectrolyte as well as of binary mixtures of a single polyelectrolyte and a low-molecular-weight salt. It was additionally applied to correlate liquid-liquid equilibrium data of some aqueous two-phase systems that may form when a polyelectrolyte and another hydrophilic but neutral polymer are simultaneously dissolved in water. A good agreement between the experimental data and the correlation results is observed for all investigated systems. (c) 2008 Elsevier B.V. All rights reserved.
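The modified ionic strength described above can be sketched as follows: the conventional sum I = 1/2 * sum(m_i * z_i**2) is kept, but each charged monomer unit of the polyelectrolyte enters with its own (monomeric) charge, scaled by the fixed dissociated fraction. All concentrations below are illustrative assumptions.

```python
# Conventional ionic strength plus the monomer-wise modification described
# in the abstract. Molalities and the dissociated fraction are assumed values.

def ionic_strength(molalities, charges):
    """I = 1/2 * sum(m_i * z_i**2) over all ionic species."""
    return 0.5 * sum(m * z ** 2 for m, z in zip(molalities, charges))

def poly_ionic_strength(m_monomer, frac_dissociated, z_monomer,
                        m_counter, z_counter):
    # Each charged monomer group counted individually (charge z_monomer),
    # not the whole polyion as one highly charged species.
    species_m = [frac_dissociated * m_monomer, m_counter]
    species_z = [z_monomer, z_counter]
    return ionic_strength(species_m, species_z)

# 0.1 mol/kg monomer units, 40% dissociated, with matching counterions:
i_poly = poly_ionic_strength(0.1, 0.4, -1, 0.04, +1)
print(round(i_poly, 4))
```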
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
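The core idea of constrained-random generation over parameter domains can be illustrated in a few lines: each input parameter gets a domain of legal values, and a predicate rejects invalid cross-combinations before they reach the testbench. The domains and the constraint below are hypothetical, not taken from the paper's tool.

```python
import random

# Toy constrained-random stimulus generator over "parameter domains".
# Domains and the legality constraint are hypothetical illustrations.

domains = {
    "burst_len": [1, 2, 4, 8],
    "addr_mode": ["aligned", "unaligned"],
}

def legal(stim):
    # Example constraint: unaligned bursts longer than 4 are invalid.
    return not (stim["addr_mode"] == "unaligned" and stim["burst_len"] > 4)

def random_stimulus(rng):
    # Rejection sampling: draw from each domain until the combination is legal.
    while True:
        stim = {k: rng.choice(v) for k, v in domains.items()}
        if legal(stim):
            return stim

rng = random.Random(42)
stims = [random_stimulus(rng) for _ in range(100)]
print(all(legal(s) for s in stims))
```

In practice a tool would carve the invalid region out of the domains up front rather than reject after sampling; rejection sampling keeps the sketch short.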
Abstract:
An algorithm inspired by ant behavior is developed to find the topology of an electric energy distribution network with minimum power loss. The algorithm's performance is investigated in hypothetical and actual circuits. When applied to an actual distribution system in a region of the State of Sao Paulo (Brazil), the solution found by the algorithm presents lower loss than the topology built by the concessionary company.
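The ant-inspired search loop can be sketched as follows: candidates are picked with probability proportional to pheromone divided by loss, pheromone evaporates each round, and the visited choice is reinforced in inverse proportion to its loss. The candidate topologies and their losses are made-up numbers, not network data from the study.

```python
import random

# Toy ant-inspired selection among candidate topologies (assumed losses).
# Pheromone-biased choice + evaporation + reinforcement.

losses = {"topology_A": 12.0, "topology_B": 9.5, "topology_C": 15.0}
pheromone = {t: 1.0 for t in losses}
rng = random.Random(1)

def pick(rng):
    # Probability proportional to pheromone / loss (lower loss preferred).
    weights = [pheromone[t] / losses[t] for t in losses]
    return rng.choices(list(losses), weights=weights)[0]

for _ in range(200):
    t = pick(rng)
    for k in pheromone:                 # evaporation
        pheromone[k] *= 0.95
    pheromone[t] += 1.0 / losses[t]     # reinforce the visited choice

best = max(pheromone, key=pheromone.get)
print(best, {k: round(v, 2) for k, v in pheromone.items()})
```

The positive feedback concentrates pheromone on the lowest-loss candidate over the iterations.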
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions to be made: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP) hub nodes have no capacity constraints and non-hub nodes must be assigned to only one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and allocational part of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances up to 100 nodes, as well as for the corresponding new generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
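The tabu-search move rule shared by the heuristics above can be sketched generically: always move to the best non-tabu neighbor, even when it is worse than the current solution, and keep recently visited solutions in a short tabu list so the search can climb out of local optima. The objective below is a toy one-dimensional function, not the USAHLP cost; tabu tenure and neighborhood are assumed.

```python
from collections import deque

# Generic tabu-search skeleton on a toy objective (optimum at x = 7).
# Tenure, neighborhood and objective are illustrative assumptions.

def tabu_search(cost, neighbors, start, iters=50, tenure=5):
    current = best = start
    tabu = deque(maxlen=tenure)        # short memory of recent solutions
    for _ in range(iters):
        candidates = [n for n in neighbors(current) if n not in tabu]
        if not candidates:
            break
        current = min(candidates, key=cost)  # best non-tabu move, even if worse
        tabu.append(current)
        if cost(current) < cost(best):
            best = current                   # track incumbent solution
    return best

cost = lambda x: (x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
found = tabu_search(cost, neighbors, start=0)
print(found)
```

In the paper's two-stage variant, one such loop improves the hub locations while an inner loop improves the allocation of non-hub nodes.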