981 results for Simplified Models.
Abstract:
Even though titanium dioxide photocatalysis has been promoted as a leading green technology for water purification, many issues have hindered its application on a large commercial scale. For the materials scientist the main issues have centred on the synthesis of more efficient materials and the investigation of degradation mechanisms, whereas for the engineer the main issues have been the development of appropriate models and the evaluation of intrinsic kinetic parameters that allow the scale-up or re-design of efficient large-scale photocatalytic reactors. To obtain intrinsic kinetic parameters, the reaction must be analysed and modelled considering the influence of the radiation field, pollutant concentrations and fluid dynamics. In this way, the obtained kinetic parameters are independent of the reactor size and configuration and can subsequently be used for scale-up purposes or for the development of entirely new reactor designs. This work investigates the intrinsic kinetics of phenol degradation over a titania film, chosen for the practicality of a fixed-film configuration over a slurry. A flat plate reactor was designed to allow control of reaction parameters including the UV irradiance, flow rates, pollutant concentration and temperature. Particular attention was paid to the investigation of the radiation field over the reactive surface and to the issue of mass-transfer-limited reactions. The ability of different emission models to describe the radiation field was investigated and compared with actinometric measurements; the RAD-LSI model was found to give the best predictions over the conditions tested. Mass transfer often limits fixed-film reactors. The influence of this phenomenon was investigated with specifically planned sets of benzoic acid experiments and with the adoption of the stagnant film model. The phenol mass transfer coefficient in the system was found to be k_m,phenol = 8.5815 x 10^-7 Re^0.65 (m s^-1).
The data obtained from a wide range of experimental conditions, together with an appropriate model of the system, enabled the determination of intrinsic kinetic parameters. The experiments were performed at four irradiation levels (70.7, 57.9, 37.1 and 20.4 W m^-2), combined with three initial phenol concentrations (20, 40 and 80 ppm), to give a wide range of final pollutant conversions (from 22% to 85%). The simple model adopted was able to fit this wide range of conditions with only four kinetic parameters: two reaction rate constants (one for phenol and one for the family of intermediates) and their corresponding adsorption constants. The intrinsic kinetic parameter values were determined as k_ph = 0.5226 mmol m^-1 s^-1 W^-1, k_I = 0.120 mmol m^-1 s^-1 W^-1, K_ph = 8.5 x 10^-4 m^3 mmol^-1 and K_I = 2.2 x 10^-3 m^3 mmol^-1. The flat plate reactor allowed the investigation of the reaction under two different light configurations: liquid-side and substrate-side illumination. The latter is of particular interest for real-world applications, where light absorption due to turbidity and pollutants contained in the water stream to be treated could represent a significant issue. The two light configurations allowed the investigation of the effects of film thickness and the determination of the optimal catalyst thickness. The experimental investigation confirmed the predictions of a porous medium model developed to investigate the influence of diffusion, advection and photocatalytic phenomena inside the porous titania film, with the optimal thickness identified as 5 µm. The model used the intrinsic kinetic parameters obtained from the flat plate reactor to predict the influence of thickness and transport phenomena on the final observed phenol conversion without using any correction factor; the excellent match between predictions and experimental results provided further proof of the quality of the parameters obtained with the proposed method.
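The reported constants lend themselves to a short numerical illustration. The sketch below is assumption-laden: the Langmuir-Hinshelwood functional form and the example concentrations are mine, not quoted from the thesis; only the fitted constants and the Reynolds-number mass-transfer correlation come from the text above.

```python
# Illustrative sketch only: the Langmuir-Hinshelwood form and example
# concentrations are assumptions; the constants and the Re-based
# mass-transfer correlation are those reported in the abstract.
K_PH = 8.5e-4     # phenol adsorption constant (m^3/mmol)
K_I = 2.2e-3      # intermediates adsorption constant (m^3/mmol)
RATE_PH = 0.5226  # phenol rate constant (mmol m^-1 s^-1 W^-1)

def phenol_rate(c_ph, c_int, irradiance):
    """Phenol degradation rate for bulk concentrations in mmol/m^3,
    with competitive adsorption of phenol and its intermediates."""
    coverage = K_PH * c_ph / (1.0 + K_PH * c_ph + K_I * c_int)
    return RATE_PH * irradiance * coverage

def mass_transfer_coeff(reynolds):
    """Fitted phenol mass-transfer correlation, k_m in m/s."""
    return 8.5815e-7 * reynolds ** 0.65

# Intermediates compete for surface sites and slow the phenol rate:
r_clean = phenol_rate(c_ph=200.0, c_int=0.0, irradiance=70.7)
r_competing = phenol_rate(c_ph=200.0, c_int=200.0, irradiance=70.7)
```

As intermediates accumulate they occupy adsorption sites, so the phenol rate falls even at constant irradiance, which is why the model needs both adsorption constants.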
Abstract:
The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 fb^-1. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses. © 2013 CERN.
Abstract:
The Shockley diode equation is the basis of the single-diode model equation, which is widely used to characterize photovoltaic cell output and behaviour. The standard form of the equation includes a series resistance (Rs) and a shunt resistance (Rsh) along with several other parameters, and most previous simulation and modelling work on single-diode photovoltaic cells has used this form. However, there is another form of the standard equation that omits Rs and Rsh: because the shunt resistance is much larger than the load resistance, and the load resistance is in turn much larger than the series resistance, only a very small power loss occurs within the photovoltaic cell. This research compares these two forms of the basic Shockley diode equation. The analysis provides a deeper understanding of the photovoltaic cell, and in particular of the behaviour of Rs and Rsh. Estimating a real-time photovoltaic system requires fast calculation, and the equation without Rs and Rsh is appropriate for the real-time environment. Error functions for both Rs and Rsh have been analysed, showing that the overall system is not significantly affected by the behaviour of these two parameters.
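The comparison described above can be sketched numerically. The parameter values below are hypothetical, and the fixed-point solver is one simple way (among many) to handle the implicit full equation.

```python
import math

# Sketch of the two forms of the single-diode model discussed above.
# All parameter values are hypothetical, chosen so that Rsh >> Rs.
Q = 1.602176634e-19   # electron charge (C)
K = 1.380649e-23      # Boltzmann constant (J/K)

def i_simplified(v, i_ph, i_0, n=1.3, t=298.15):
    """Single-diode equation without Rs and Rsh (explicit in V)."""
    vt = n * K * t / Q  # modified thermal voltage
    return i_ph - i_0 * (math.exp(v / vt) - 1.0)

def i_full(v, i_ph, i_0, r_s=0.01, r_sh=1000.0, n=1.3, t=298.15,
           iters=200):
    """Single-diode equation with Rs and Rsh; implicit in I, solved
    here by fixed-point iteration starting from the photocurrent."""
    vt = n * K * t / Q
    i = i_ph
    for _ in range(iters):
        i = (i_ph - i_0 * (math.exp((v + i * r_s) / vt) - 1.0)
             - (v + i * r_s) / r_sh)
    return i

# With Rsh much larger and Rs much smaller than the load resistance,
# the two forms nearly coincide:
i1 = i_simplified(0.5, i_ph=5.0, i_0=1e-9)
i2 = i_full(0.5, i_ph=5.0, i_0=1e-9)
```

The small gap between `i1` and `i2` is the "very small power loss" the abstract refers to; dropping Rs and Rsh trades that small error for a closed-form, faster calculation.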
Abstract:
This paper compares a number of different moment-curvature models for cracked concrete sections that contain both steel and external fiber-reinforced polymer (FRP) reinforcement. The question of whether to use a whole-section analysis or one that considers the FRP separately is discussed. Five existing and three new models are compared with test data for moment-curvature or load-deflection behavior, and five models are compared with test results for plate-end debonding using a global energy balance approach (GEBA). A proposal is made for the use of one of the simplified models. The availability of a simplified model opens the way to the production of design aids, so that the GEBA can be made available to practicing engineers through design guides and parametric studies. Copyright © 2014, American Concrete Institute.
Abstract:
In the field of vehicle dynamics, commercial software can aid the designer during the conceptual and detailed design phases. Simulations using these tools can quickly provide specific design metrics, such as yaw and lateral velocity, for standard maneuvers. However, it remains challenging to correlate these metrics with empirical quantities that depend on many external parameters and design specifications. This is the case for tire wear, which depends on the frictional work developed by the tire-road contact. In this study, an approach is proposed to estimate the tire-road friction during steady-state longitudinal and cornering maneuvers. Using this approach, a qualitative formula for tire wear evaluation is developed, and conceptual design analyses of cornering maneuvers are performed using simplified vehicle models. The influence of design parameters such as the cornering stiffness, the distance between the axles, and the steer angle ratio between the steering axles for vehicles with two steering axles is evaluated. The proposed methodology allows the designer to predict tire wear using simplified vehicle models during the conceptual design phase.
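As a rough illustration of this kind of analysis (not the thesis's actual formula), a linear single-track ("bicycle") model gives the steady-state axle forces and slip angles, from which a qualitative wear index proportional to frictional work can be formed. All parameter values below are invented.

```python
# Hypothetical parameters for a linear single-track ("bicycle") model.
M = 1500.0            # vehicle mass (kg)
A, B = 1.2, 1.6       # CG to front/rear axle distance (m); wheelbase = A + B
CF, CR = 80e3, 90e3   # front/rear axle cornering stiffness (N/rad)

def steady_state_cornering(v, radius):
    """Steady-state cornering: axle lateral forces, slip angles, required
    steer angle, and a qualitative wear index ~ |F_y * alpha| per axle."""
    wheelbase = A + B
    ay = v ** 2 / radius                 # lateral acceleration (m/s^2)
    fyf = M * ay * B / wheelbase         # front-axle lateral force (N)
    fyr = M * ay * A / wheelbase         # rear-axle lateral force (N)
    alpha_f = fyf / CF                   # linear-tire slip angles (rad)
    alpha_r = fyr / CR
    delta = wheelbase / radius + alpha_f - alpha_r   # steer angle (rad)
    wear = {"front": abs(fyf * alpha_f), "rear": abs(fyr * alpha_r)}
    return delta, wear

delta, wear = steady_state_cornering(v=20.0, radius=100.0)
```

The wear index is only qualitative, as in the abstract: it scales with lateral force times slip angle, a proxy for the frictional work dissipated in the contact patch during the maneuver.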
Abstract:
Routes of migration and exchange are important factors in the debate about how the Neolithic transition spread into Europe. Studying the genetic diversity of livestock can help in tracing back some of these past events. Notably, domestic goat (Capra hircus) did not have any wild progenitors (Capra aegagrus) in Europe before their arrival from the Near East. Studies of mitochondrial DNA have shown that the diversity in European domesticated goats is a subset of that in the wild, underlining the ancestral relationship between both populations. Additionally, an ancient DNA study on Neolithic goat remains has indicated that a high level of genetic diversity was already present early in the Neolithic in northwestern Mediterranean sites. We used coalescent simulations and approximate Bayesian computation, conditioned on patterns of modern and ancient mitochondrial DNA diversity in domesticated and wild goats, to test a series of simplified models of the goat domestication process. Specifically, we ask if domestic goats descend from populations that were distinct prior to domestication. Although the models we present require further analyses, preliminary results indicate that wild and domestic goats are more likely to descend from a single ancestral wild population that was managed 11,500 years before present, and that serial founding events characterise the spread of Capra hircus into Europe.
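The ABC logic described above can be sketched with a toy simulator standing in for the coalescent simulations. Everything below (the prior, the summary statistic, the tolerance) is illustrative, not the study's actual pipeline.

```python
import random

# Illustrative rejection-sampling ABC: draw a parameter from the prior,
# simulate a summary statistic, keep draws whose statistic lands close to
# the observed one. A toy formula stands in for a coalescent simulator.
random.seed(42)

OBSERVED_DIVERSITY = 0.8

def simulate_diversity(n_e):
    """Toy simulator: diversity saturates with effective size n_e,
    plus noise (stand-in for a stochastic coalescent simulation)."""
    return n_e / (n_e + 1.0) + random.gauss(0.0, 0.05)

def abc_rejection(n_samples=20000, tolerance=0.02):
    """Keep prior draws whose simulated statistic is within `tolerance`
    of the observed value; the kept draws approximate the posterior."""
    accepted = []
    for _ in range(n_samples):
        n_e = random.uniform(0.1, 20.0)   # flat prior on n_e
        if abs(simulate_diversity(n_e) - OBSERVED_DIVERSITY) < tolerance:
            accepted.append(n_e)
    return accepted

posterior = abc_rejection()
```

In the study, competing demographic models (single vs. distinct ancestral populations) would each supply their own simulator, and the acceptance rates and posterior distributions are what allow the models to be compared.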
Abstract:
Demands for delivering high instantaneous power in a compressed form (pulse shape) have increased widely during recent decades. The flexible shapes with variable pulse specifications offered by pulsed power have made it a practical and effective supply method for an extensive range of applications. In particular, the release of basic subatomic particles (i.e. electrons, protons and neutrons) from an atom (the ionization process) and the synthesizing of molecules to form ions or other molecules are among the reactions that necessitate a large amount of instantaneous power. In addition to such decomposition processes, there have recently been demands for pulsed power in other areas, such as the combination of molecules (i.e. fusion, material joining), radiation generation (i.e. electron beams, lasers and radar), explosions (i.e. concrete recycling), and wastewater, exhaust gas and material surface treatments. These pulses are widely employed in the silent discharge process in all types of materials (including gases, fluids and solids), in some cases to form a plasma and consequently accelerate the associated process. Due to this fast-growing demand for pulsed power in industrial and environmental applications, the need for more efficient and flexible pulse modulators is now receiving greater consideration. Sensitive applications, such as plasma fusion and laser guns, also require more precisely produced repetitive pulses of higher quality. Many research studies are being conducted in areas that need a flexible pulse modulator to vary pulse features, in order to investigate the influence of these variations on the application. In addition, there is a need to prevent the waste of a considerable amount of energy caused by the arc phenomena that frequently occur after the plasma process. Control over power flow during the supply process is a critical capability that enables the pulse supply to halt the supply process at any stage.
Different pulse modulators utilising different accumulation techniques, including Marx Generators (MG), Magnetic Pulse Compressors (MPC), Pulse Forming Networks (PFN) and Multistage Blumlein Lines (MBL), are currently employed to supply a wide range of applications. Gas/magnetic switching technologies (such as spark gaps and hydrogen thyratrons) have conventionally been used as switching devices in pulse modulator structures because of their high voltage ratings and considerably short rise times. However, they also suffer from serious drawbacks, such as low efficiency, reliability and repetition rate, and a short life span. Being bulky, heavy and expensive are further disadvantages of these devices. Recently developed solid-state switching technology is an appropriate substitute due to the benefits it brings to pulse supplies. Besides being compact, efficient, affordable and reliable, and having a long life span, its high-frequency switching capability allows repetitive operation of the pulsed power supply. The main concerns in using solid-state transistors are the voltage rating and the rise time of available switches, which in some cases cannot satisfy the application's requirements. However, there are several power electronics configurations and techniques that make solid-state utilisation feasible for high voltage pulse generation. Therefore, the design and development of novel methods and topologies with higher efficiency and flexibility for pulsed power generators have been the main scope of this research work. This aim is pursued through several innovative proposals that can be classified under the following two principal objectives.
• To innovate and develop novel solid-state based topologies for pulsed power generation.
• To improve available technologies that have the potential to accommodate solid-state technology, by revising, reconfiguring and adjusting their structures and control algorithms.
The quest to identify novel topologies for proper pulsed power production began with a deep and thorough review of conventional pulse generators and useful power electronics topologies. This study indicated that efficiency and flexibility are the most significant demands of plasma applications that have not been met by state-of-the-art methods. Many solid-state based configurations were considered and simulated in order to evaluate their potential for use in the pulsed power area. Parts of this literature review are documented in Chapter 1 of this thesis. Current source topologies demonstrate valuable advantages in supplying loads with capacitive characteristics, such as plasma applications. To investigate the influence of the switching transients associated with solid-state devices on the rise time of pulses, simulation-based studies were undertaken. A variable current source was considered to pump different current levels into a capacitive load, and it was evident that dissimilar dv/dt values are produced at the output. On the evidence acquired from this examination, transient effects on pulse rise time were ruled out. A detailed report of this study is given in Chapter 6 of this thesis. This study inspired the design of a solid-state based topology that takes advantage of both current and voltage sources. A series of switch-resistor-capacitor units at the output splits the produced voltage into lower levels, so that it can be shared by the switches. A smart but complicated switching strategy is also designed to discharge the residual energy after each supply cycle.
To prevent reverse power flow and to reduce the complexity of the control algorithm in this system, the resistors in the common paths of the units are substituted with diode rectifiers (switch-diode-capacitor). This modification not only makes it feasible to stop the load supply process at any stage (and consequently save energy), but also enables the converter to operate in a two-stroke mode with asymmetrical capacitors. The component sizing and energy-exchange calculations are carried out with respect to application specifications and demands. Both topologies were modelled simply, and simulation studies were carried out with the simplified models. Experimental assessments were also executed on implemented hardware, and the results verified the initial analysis. Details of both converters are thoroughly discussed in Chapters 2 and 3 of the thesis. Conventional MGs have recently been modified to use solid-state transistors (e.g. insulated-gate bipolar transistors) instead of magnetic/gas switching devices. The resistive insulators previously used in their structures are substituted with diode rectifiers to provide proper voltage sharing. However, despite utilizing solid-state technology in MG configurations, further design and control amendments can still be made to achieve improved performance with fewer components. Among a number of charging techniques, the resonance phenomenon is adopted in one proposal to charge the capacitors. In addition to charging the capacitors to twice the input voltage, triggering the switches at the moment at which the conducted current through them is zero significantly reduces the switching losses. Another configuration is also introduced in this research for the Marx topology, based on commutation circuits that use a current source to charge the capacitors.
According to this design, diode-capacitor units, each including two Marx stages, are connected in cascade through solid-state devices and aggregate the voltages across the capacitors to produce a high voltage pulse. The polarity of the voltage across one capacitor in each unit is reversed in an intermediate mode by connecting the commutation circuit to the capacitor. In this topology, the input side is insulated from the load side by disconnecting the load from the current source during the supply process. Furthermore, the number of required fast switching devices in both designs is reduced to half of the number used in a conventional MG; they are replaced with slower switches (such as thyristors) that need simpler driving modules. In addition, the number of switches contributing to the discharging paths is halved, leading to a reduction in conduction losses. The associated models are simulated, and hardware tests are performed to verify the validity of the proposed topologies. Chapters 4, 5 and 7 of the thesis present all the relevant analysis and approaches for these topologies.
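The resonant charging scheme described above (capacitors charged to twice the input voltage, with switching at zero current) follows from the closed-form response of a series L-C circuit. The sketch below uses hypothetical component values; it is an illustration of the principle, not the thesis's circuit.

```python
import math

# Series L-C resonant charging of a Marx-stage capacitor from a DC source:
# the capacitor rings up to twice the input voltage, and the charging
# current returns to zero at that instant, enabling zero-current
# (low-loss) switching. Component values are hypothetical.
V_IN = 1000.0   # input voltage (V)
L = 1e-3        # charging inductance (H)
C = 1e-6        # stage capacitance (F)

omega = 1.0 / math.sqrt(L * C)   # resonant angular frequency (rad/s)
z = math.sqrt(L / C)             # characteristic impedance (ohm)

def v_cap(t):
    """Capacitor voltage during resonant charge-up (from 0 V)."""
    return V_IN * (1.0 - math.cos(omega * t))

def i_charge(t):
    """Charging current through the inductor."""
    return (V_IN / z) * math.sin(omega * t)

t_switch = math.pi / omega       # end of the half resonant cycle
# At t_switch: v_cap ~ 2 * V_IN and i_charge ~ 0, so triggering the
# switch here minimises switching losses.
```

Triggering at `t_switch` is exactly the zero-current switching instant mentioned above: the voltage doubling and the loss reduction come from the same half resonant cycle.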
Abstract:
Image representations derived from simplified models of the primary visual cortex (V1), such as HOG and SIFT, elicit good performance in a myriad of visual classification tasks including object recognition/detection, pedestrian detection and facial expression classification. A central question in the vision, learning and neuroscience communities regards why these architectures perform so well. In this paper, we offer a unique perspective to this question by subsuming the role of V1-inspired features directly within a linear support vector machine (SVM). We demonstrate that a specific class of such features in conjunction with a linear SVM can be reinterpreted as inducing a weighted margin on the Kronecker basis expansion of an image. This new viewpoint on the role of V1-inspired features allows us to answer fundamental questions on the uniqueness and redundancies of these features, and offer substantial improvements in terms of computational and storage efficiency.
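The reinterpretation rests on a simple linear-algebra fact that can be verified directly: a linear feature map followed by a linear SVM is a single linear functional on the pixels, with the feature map folded into the weights. The toy matrices below are illustrative stand-ins, not the paper's features.

```python
# A linear feature map F followed by a linear SVM score w . (F x) equals
# scoring the raw pixels with folded weights (F^T w) . x. Tiny dense
# matrices stand in for a V1-inspired filter bank.

def mat_vec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def transpose(m):
    return [list(col) for col in zip(*m)]

F = [[1.0, -1.0, 0.0],   # hypothetical 2x3 "filter bank"
     [0.5,  0.5, 0.5]]
w = [2.0, -3.0]          # hypothetical SVM weights in feature space
x = [0.2, 0.7, -0.1]     # hypothetical image as a pixel vector

score_features = dot(w, mat_vec(F, x))           # SVM on V1-style features
score_pixels = dot(mat_vec(transpose(F), w), x)  # same score on raw pixels
```

This equivalence is what lets the paper analyse V1-inspired features as reweightings of a basis expansion of the image, and it is also the source of the computational and storage savings: the folded weights can replace explicit feature extraction at test time.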
Abstract:
Phylogenetic inference from sequences can be misled by both sampling (stochastic) error and systematic error (nonhistorical signals where reality differs from our simplified models). A recent study of eight yeast species using 106 concatenated genes from complete genomes showed that even small internal edges of a tree received 100% bootstrap support. This effective negation of stochastic error from large data sets is important, but longer sequences exacerbate the potential for biases (systematic error) to be positively misleading. Indeed, when we analyzed the same data set using minimum evolution optimality criteria, an alternative tree received 100% bootstrap support. We identified a compositional bias as responsible for this inconsistency and showed that it is reduced effectively by coding the nucleotides as purines and pyrimidines (RY-coding), reinforcing the original tree. Thus, a comprehensive exploration of potential systematic biases is still required, even though genome-scale data sets greatly reduce sampling error.
Abstract:
Light neutralino dark matter can be achieved in the Minimal Supersymmetric Standard Model if staus are rather light, with mass around 100 GeV. We perform a detailed analysis of the relevant supersymmetric parameter space, including also the possibility of light selectons and smuons, and of light higgsino- or wino-like charginos. In addition to the latest limits from direct and indirect detection of dark matter, ATLAS and CMS constraints on electroweak-inos and on sleptons are taken into account using a ``simplified models'' framework. Measurements of the properties of the Higgs boson at 125 GeV, which constrain amongst others the invisible decay of the Higgs boson into a pair of neutralinos, are also implemented in the analysis. We show that viable neutralino dark matter can be achieved for masses as low as 15 GeV. In this case, light charginos close to the LEP bound are required in addition to light right-chiral staus. Significant deviations are observed in the couplings of the 125 GeV Higgs boson. These constitute a promising way to probe the light neutralino dark matter scenario in the next run of the LHC. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
This paper proposes an analytical approach that is generalized for the design of various types of electric machines based on a physical magnetic circuit model. Conventional approaches have been used to predict the behavior of electric machines but have limitations in accurate flux saturation analysis and hence machine dimensioning at the initial design stage. In particular, magnetic saturation is generally ignored or compensated by correction factors in simplified models since it is difficult to determine the flux in each stator tooth for machines with any slot-pole combinations. In this paper, the flux produced by stator winding currents can be calculated accurately and rapidly for each stator tooth using the developed model, taking saturation into account. This aids machine dimensioning without the need for a computationally expensive finite element analysis (FEA). A 48-slot machine operated in induction and doubly-fed modes is used to demonstrate the proposed model. FEA is employed for verification.
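A minimal example of the physical magnetic-circuit idea (not the paper's model, which resolves per-tooth fluxes and saturation for arbitrary slot-pole combinations) is Hopkinson's law applied to a series iron path with an air gap. All dimensions below are hypothetical.

```python
import math

# Hopkinson's law for a series magnetic circuit: flux = MMF / reluctance,
# with each segment's reluctance R = l / (mu0 * mu_r * A).
MU0 = 4e-7 * math.pi   # permeability of free space (H/m)

def reluctance(length, area, mu_r):
    """Reluctance of one segment of the flux path (1/H)."""
    return length / (MU0 * mu_r * area)

def series_flux(n_turns, current, segments):
    """Flux through a series magnetic circuit driven by a winding MMF."""
    mmf = n_turns * current                      # ampere-turns
    r_total = sum(reluctance(*seg) for seg in segments)
    return mmf / r_total

# Iron core (high mu_r) in series with a small air gap (mu_r = 1):
segments = [(0.2, 1e-4, 4000.0),    # (length m, area m^2, relative mu)
            (0.001, 1e-4, 1.0)]
flux = series_flux(200, 2.0, segments)
```

Even in this toy circuit the 1 mm air gap contributes roughly twenty times the reluctance of the iron path, which hints at why gap and saturation modelling (where the iron's effective mu_r collapses) dominate accurate machine dimensioning.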
Abstract:
The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC, using 4.7±0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single- and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and the first- and second-generation squarks, stops and sbottoms are excluded for masses up to about 800, 425 and 400 GeV, respectively.
With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H → ZZ* → 4-lepton channel. Adaptations of the razor kinematic variables are introduced for the H → WW* → 2-lepton/2-neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H → ZZ* → 4-lepton final state, particularly for measuring CP violation in these decays.
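For reference, the razor variables can be computed from the two "megajet" momenta and the missing transverse momentum. The sketch below follows the commonly used definitions of MR and R; the momenta are made up, and analysis-specific variants (boosts, mass corrections) are not captured.

```python
import math

# Razor variables for two massless megajets, following the common CMS
# definitions: MR estimates the heavy-particle mass scale, and
# R^2 = (MTR / MR)^2 quantifies the transverse momentum imbalance.
def razor_variables(j1, j2, met):
    """j1, j2: (px, py, pz) of the two megajets; met: (mex, mey)
    missing transverse momentum. Returns (MR, R^2)."""
    p1 = math.sqrt(sum(c * c for c in j1))
    p2 = math.sqrt(sum(c * c for c in j2))
    # Longitudinally boost-invariant mass-scale estimator:
    mr = math.sqrt((p1 + p2) ** 2 - (j1[2] + j2[2]) ** 2)
    pt1 = math.hypot(j1[0], j1[1])
    pt2 = math.hypot(j2[0], j2[1])
    met_mag = math.hypot(met[0], met[1])
    dot = met[0] * (j1[0] + j2[0]) + met[1] * (j1[1] + j2[1])
    # Transverse mass built from the megajets and missing momentum:
    mtr = math.sqrt((met_mag * (pt1 + pt2) - dot) / 2.0)
    return mr, (mtr / mr) ** 2

mr, r2 = razor_variables((100.0, 0.0, 50.0), (-80.0, 20.0, -30.0),
                         (-20.0, -20.0))
```

For Standard Model backgrounds R^2 falls steeply, while signal events with genuine missing momentum populate the high-MR, high-R^2 tail, which is what makes the 2D razor plane a discriminating search space.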
Abstract:
Many types of oceanic physical phenomena have a wide range in both space and time. In general, simplified models, such as the shallow water model, are used to describe these oceanic motions. The shallow water equations are widely applied in various oceanic and atmospheric contexts. By using the two-layer shallow water equations, stratification effects can also be considered. In this research, the sixth-order combined compact method is investigated and numerically implemented as a high-order method to solve the two-layer shallow water equations. The second-order centered, fourth-order compact and sixth-order super compact finite difference methods are also used for spatial differencing of the equations. The first part of the present work is devoted to an accuracy assessment of the sixth-order super compact finite difference method (SCFDM) and the sixth-order combined compact finite difference method (CCFDM) for spatial differencing of the linearized two-layer shallow water equations on Arakawa's A-E and Randall's Z numerical grids. Two general discrete dispersion relations on different numerical grids, for inertia-gravity and Rossby waves, are derived. These general relations can be used to evaluate the performance of any desired numerical scheme. For both inertia-gravity and Rossby waves, the minimum error generally occurs on the Z grid using either the sixth-order SCFDM or CCFDM method. On Randall's Z grid, the sixth-order CCFDM exhibits a substantial improvement over the sixth-order SCFDM for the frequency of the barotropic and baroclinic modes of the linear inertia-gravity waves of the two-layer shallow water model. For the Rossby waves, the sixth-order SCFDM shows improvement over the sixth-order CCFDM for the barotropic and baroclinic modes, except on Arakawa's C grid. In the second part of the present work, the sixth-order CCFDM method is used to solve the one-layer and two-layer shallow water equations in their nonlinear form.
In the one-layer model with periodic boundaries, the performance of the methods for mass conservation is compared. The results show the high accuracy of the sixth-order CCFDM method in simulating a complex flow field. Furthermore, to evaluate the performance of the method in a non-periodic domain, the sixth-order CCFDM is applied to the spatial differencing of the vorticity-divergence-mass representation of the one-layer shallow water equations to solve a wind-driven current problem with no-slip boundary conditions. The results show good agreement with published work. Finally, the performance of different schemes for spatial differencing of the two-layer shallow water equations on the Z grid with periodic boundaries is investigated. The results illustrate the high accuracy of the combined compact method.
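The flavour of compact differencing can be shown with the classical fourth-order Padé relation, a simpler relative of the sixth-order SCFDM/CCFDM schemes discussed above (which couple first and second derivatives over wider stencils). This is an illustrative consistency check, not code from the study.

```python
import math

# Fourth-order Pade compact relation for the first derivative:
#   f'_{i-1} + 4 f'_i + f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / h
# Checked for consistency on f(x) = sin(x) with periodic grid points:
# plugging in the exact derivative leaves only the O(h^4) truncation term.
N = 64
H = 2.0 * math.pi / N
X = [i * H for i in range(N)]
F = [math.sin(x) for x in X]
DF = [math.cos(x) for x in X]   # exact derivative

max_residual = 0.0
for i in range(N):
    lhs = DF[i - 1] + 4.0 * DF[i] + DF[(i + 1) % N]
    rhs = 3.0 * (F[(i + 1) % N] - F[i - 1]) / H
    max_residual = max(max_residual, abs(lhs - rhs))
# The residual equals (h^4 / 30) f'''''(x) to leading order, so halving
# h reduces it roughly 16-fold.
```

In practice the implicit left-hand side is solved as a (cyclic) tridiagonal system for all f'_i at once; that coupling across the stencil is what buys compact schemes their high accuracy on narrow stencils, the property the dispersion analysis above exploits.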
Abstract:
This work was carried out in the context of a curricular internship at the SE2P design office, during which structural fire design tools were developed and integrated into a workflow that follows the principles of BIM (Building Information Modeling) technology. In particular, a fire analysis procedure was implemented according to the simplified models prescribed by the Eurocodes. These models guarantee structural safety while allowing the passive protection requirements for different scenarios to be determined quickly and efficiently, with a view to obtaining the most economical solution. Beyond presenting the work developed during the internship, this dissertation aims to provide the reader with a document that introduces the main concepts of structural design under fire conditions, indicating the various analysis options and their respective advantages and disadvantages, and helping to assess their suitability for the project under study. In this context, a general introduction to the phenomenon of fire and to the most common protection measures is given, indicating the normative documents applicable both to structural design and to protection materials. The interaction between the various standards that must be consulted when a fire analysis is performed, and which of them apply to each phase of the analysis, is also addressed. A clear distinction is made between the analysis of thermal and mechanical behaviour, indicating the main material properties relevant to each type of analysis and the way they are affected by temperature. In the field of thermal behaviour analysis, reference is essentially made to the simplified calculation models for temperature development in steel members and composite beams, with and without passive protection.
Regarding mechanical behaviour analysis, the simplified calculation models for the verification of structural safety are described, taking into account the actions and combinations for the fire situation and the loss of resistance at elevated temperatures. With respect to the work carried out at SE2P on the development of calculation tools and their application to fire analysis, a detailed description is given of the whole process and of how it is integrated into the BIM concept, using information from the structural models and feeding new data back into them. The complete analysis procedure and the developed tools were also applied to a case study based on a residential building. This case study also served to create optimisation scenarios using market price references for steel, its shop fabrication and passive protection systems, demonstrating the difficulty of finding quick and direct decision paths in the optimisation process.
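As an illustration of the simplified thermal models referred to above, the sketch below implements a Eurocode-style incremental calculation of unprotected steel temperature under the ISO 834 standard fire. Constant specific heat and a unit shadow factor are simplifications made here; a design calculation would use the temperature-dependent material properties from EN 1993-1-2.

```python
import math

# Lumped incremental heating of an unprotected steel member exposed to
# the ISO 834 standard fire (EN 1993-1-2 style). Simplifications made
# here: constant specific heat, shadow factor taken as 1.
SIGMA = 5.67e-8      # Stefan-Boltzmann constant (W/m^2 K^4)
ALPHA_C = 25.0       # convective heat transfer coefficient (W/m^2 K)
EPS = 0.7            # resultant emissivity
RHO_A = 7850.0       # steel density (kg/m^3)
C_A = 600.0          # specific heat of steel (J/kg K), held constant

def gas_temp(t_min):
    """ISO 834 standard fire curve (degrees C, time in minutes)."""
    return 20.0 + 345.0 * math.log10(8.0 * t_min + 1.0)

def steel_temp(section_factor, t_end_min=30.0, dt_s=5.0):
    """Steel temperature after t_end_min minutes of exposure;
    section_factor = Am/V in 1/m (slender sections have high Am/V)."""
    theta_a = 20.0
    t = 0.0
    while t < t_end_min * 60.0:
        theta_g = gas_temp(t / 60.0)
        h_net = (ALPHA_C * (theta_g - theta_a)               # convection
                 + EPS * SIGMA * ((theta_g + 273.0) ** 4     # radiation
                                  - (theta_a + 273.0) ** 4))
        theta_a += section_factor / (C_A * RHO_A) * h_net * dt_s
        t += dt_s
    return theta_a

# A slender section (high Am/V) heats much faster than a stocky one:
t_slender = steel_temp(section_factor=200.0)
t_stocky = steel_temp(section_factor=50.0)
```

This dependence on the section factor Am/V is what drives the passive-protection sizing described above: slender members approach the gas temperature quickly and therefore need protection sooner, which is where the cost-optimisation scenarios come in.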
Abstract:
In a society with high energy consumption, dependence on fossil fuels of evidently diminishing availability is an increasingly worrying issue, as is the atmospheric pollution resulting from their use. There is therefore a growing need to turn to renewable energies and to promote the optimisation and use of resources. Anaerobic digestion (AD) of sludge is a stabilisation process used in Wastewater Treatment Plants (WWTP), and its final products are digested sludge and biogas. Consisting mostly of methane, the biogas can be used as an energy source, thereby reducing the WWTP's energy dependence and its greenhouse gas emissions. Optimising the sludge AD process is essential to increase biogas production. In this internship report, Artificial Neural Networks (ANN) were applied to the AD of WWTP sludge. ANNs are simplified models inspired by the functioning of human neurons that acquire knowledge through experience. Once an ANN is created and trained, it produces approximately correct output values for the inputs supplied. Since AD is a very complex process, its optimisation presents several difficulties. This was the motivation for using ANNs to optimise biogas production in the digesters of the Espinho and Ílhavo WWTPs of AdCL, using the NeuralTools software from Palisade, thereby contributing to the understanding of the process and of the impact of some variables on biogas production.
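A minimal, self-contained sketch of the idea described above: a tiny feed-forward network (one hidden layer, trained by gradient descent) learning a toy input-output curve. This only illustrates how an ANN "acquires knowledge through experience"; the actual study used the NeuralTools software on plant operating data, and the toy data below are invented.

```python
import math
import random

# A tiny one-hidden-layer network trained by stochastic gradient descent
# on a toy S-shaped response (stand-in for a process variable).
random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

DATA = [(x / 10.0, sigmoid(6.0 * (x / 10.0 - 0.5))) for x in range(11)]

H = 4                                   # hidden units
w1 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1.0, 1.0) for _ in range(H)]
b2 = 0.0

def forward(x):
    hidden = [sigmoid(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * hidden[j] for j in range(H)) + b2, hidden

LR = 0.5
for _ in range(10000):                  # training epochs
    for x, y in DATA:
        out, hidden = forward(x)
        err = out - y                   # gradient of 0.5 * err**2 w.r.t. out
        for j in range(H):              # backpropagate one step
            grad_h = err * w2[j] * hidden[j] * (1.0 - hidden[j])
            w2[j] -= LR * err * hidden[j]
            w1[j] -= LR * grad_h * x
            b1[j] -= LR * grad_h
        b2 -= LR * err

mse = sum((forward(x)[0] - y) ** 2 for x, y in DATA) / len(DATA)
```

After training, the network reproduces the curve it was shown: the "approximately correct outputs for the inputs supplied" of the abstract. In the plant application, the inputs would be digester operating variables and the output the biogas production.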