920 results for Sensitivity Analysis
Abstract:
Inverters play a key role in connecting sustainable energy (SE) sources to local loads and the ac grid. Although the use of renewable sources has expanded rapidly in recent years, fundamental research on the design of inverters specialized for these systems is still needed. Recent advances in power electronics have led to new topologies and switching patterns for single-stage power conversion that are well suited to SE sources and energy storage devices. The current source inverter (CSI) topology, along with a newly proposed switching pattern, can convert a low dc voltage to the line ac voltage in a single stage. Simple implementation and high reliability, together with the potential for higher efficiency and lower cost, make the so-called single-stage boost inverter (SSBI) a viable competitor to existing SE-based power conversion technologies. A dynamic model is one of the most essential requirements for performance analysis and control design of any engineering system; thus, to obtain satisfactory operation, it is necessary to derive a dynamic model of the SSBI system. However, because of the switching behavior and the nonlinear elements involved, analysis of the SSBI is a complicated task. This research applies the state-space averaging technique to develop the state-space-averaged model of the SSBI under stand-alone and grid-connected modes of operation. A small-signal model is then derived by means of the perturbation and linearization method. An experimental hardware set-up, including a laboratory-scale prototype SSBI, is built, and the validity of the obtained models is verified through simulation and experiments. Finally, an eigenvalue sensitivity analysis is performed to investigate the stability and dynamic behavior of the SSBI system over a typical range of operation.
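As a toy illustration of the eigenvalue sensitivity analysis mentioned above, the sketch below perturbs one entry of a small state matrix and measures how the eigenvalues move. The 2x2 matrix is invented for illustration only and is not the SSBI small-signal model.

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues (ascending) of the 2x2 matrix [[a, b], [c, d]],
    assuming they are real."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - disc, tr / 2.0 + disc

def sensitivity_to_entry_a(a, b, c, d, h=1e-6):
    """Finite-difference sensitivity of both eigenvalues with respect
    to the (1,1) entry of the state matrix."""
    lam0 = eig2(a, b, c, d)
    lam1 = eig2(a + h, b, c, d)
    return tuple((x1 - x0) / h for x0, x1 in zip(lam0, lam1))

# Illustrative stable state matrix (NOT the actual SSBI model):
# eigenvalues -5 and -2; being upper triangular, only the eigenvalue
# at -2 depends on the (1,1) entry.
S = sensitivity_to_entry_a(-2.0, 1.0, 0.0, -5.0)
```

In a real study the same perturbation would be applied to each physical parameter of the averaged model in turn, tracking how the closed-loop poles migrate over the operating range.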
Abstract:
Mathematical models are increasingly used in environmental science, raising the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model, the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data are split into an estimation and a prediction data set using two data-splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, performed as part of the identifiability analysis, showed that some model parameters were significantly more sensitive than others. Applying the determined optimal parameter values was shown to successfully equilibrate the model biases among the individual streets and species. It was also shown that the frequentist approach applied for the uncertainty calculations underestimated the parameter uncertainties. The model parameter uncertainty was qualitatively assessed to be significant, and reduction strategies were identified.
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
The asynchronous polyphase induction motor has been the motor of choice in industrial settings for roughly the past half century, because power electronics can be used to control its output behavior; before that, the dc motor was widely used because its speed and torque are easy to control. Two further reasons for the induction motor's dominance are its ruggedness and low cost: it is brushless and has fewer internal parts that need maintenance or replacement, which makes it inexpensive compared with other motors, such as the dc motor. Because of these facts, the induction motor and drive system have been gaining market share in industry and even in alternative applications such as hybrid electric vehicles and electric vehicles. The subject of this thesis is to ascertain various control algorithms' advantages and disadvantages and to give recommendations for their use under certain conditions and in distinct applications. Four drives are compared as fairly as possible through their parameter sensitivities, dynamic responses, and steady-state errors. Different switching techniques are used to show that the motor drive is separate from the switching scheme; changing the switching scheme produces entirely different responses for each motor drive.
Abstract:
The purpose of this thesis is to analyse the spatial and temporal variability of the aragonite saturation state (ΩAR), commonly used as an indicator of ocean acidification, in the North-East Atlantic. When the aragonite saturation state decreases below a certain threshold, ΩAR < 1, calcifying organisms (i.e. molluscs, pteropods, foraminifera, crabs, etc.) are subject to dissolution of their shells and aragonite structures. This objective agrees with the challenge 'Ocean, climate change and acidification' of the EU COST Ocean Governance for Sustainability project, which aims to combine the information collected on the state of health of the oceans. Two open-source data products, EMODnet and GLODAPv2, have been integrated and analysed for the first time in the North-East Atlantic region. The integrated dataset contains 1038 ΩAR vertical profiles whose time distribution spans from 1970 to 2014. The ΩAR has been computed with the CO2SYS software considering different combinations of the input parameters pH, Total Alkalinity (TAlk) and Dissolved Inorganic Carbon (DIC), associated with temperature, salinity and pressure at in situ conditions. A sensitivity analysis has been performed to better understand the consistency of ΩAR computed from the different combinations of pH, TAlk and DIC and to verify the difference between the observed TAlk and DIC values and their output values from the CO2SYS tool. Maps of ΩAR have been computed with the best data coverage obtained from the two datasets, at different depth levels in the area of investigation, and compared to the work of Jiang et al. (2015). The results are consistent and show similar horizontal and vertical patterns. The study highlights some aragonite-undersaturated values (ΩAR < 1) below 500 meters depth, suggesting a potential effect of acidification over the period considered.
This thesis is intended as preliminary work for future studies that will be able to characterize the decadal variability of ΩAR based on the extended time series assembled in this work.
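The undersaturation screening described above, flagging ΩAR < 1 below 500 m, can be sketched in a few lines. The depth/ΩAR pairs here are invented stand-ins, not values from the EMODnet or GLODAPv2 datasets:

```python
# Flag aragonite-undersaturated samples (omega_ar < 1) below 500 m depth.
# The (depth_m, omega_ar) pairs are invented stand-in data.
samples = [(50, 2.8), (300, 1.9), (600, 1.1), (800, 0.95), (1200, 0.88)]

undersaturated = [(d, o) for d, o in samples if d > 500 and o < 1.0]
fraction = len(undersaturated) / len(samples)
```

On the real integrated dataset the same filter would run over the 1038 vertical profiles, and the resulting fraction could be mapped per depth level.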
Abstract:
Increasing environmental awareness has been a significant driving force for innovations and process improvements in different sectors, and the field of chemistry is no exception. Innovating around industrial chemical processes in line with current environmental responsibilities is, however, no mean feat. One such hard-to-overhaul process is the production of methyl methacrylate (MMA), commonly carried out via the acetone cyanohydrin (ACH) process developed back in the 1930s. Different alternatives to the ACH process have emerged over the years, and the Alpha Lucite process has been particularly promising, with a combined plant capacity of 370,000 metric tonnes in Singapore and Saudi Arabia. This study applied Life Cycle Assessment methodology to conduct a comparative analysis between the ACH and Lucite processes, with the aim of ascertaining the effect of applying principles of green chemistry as a process improvement tool on overall environmental impacts. A further comparison was made between the Lucite process and a lab-scale process that is a further improvement on the former, also based on green chemistry principles. Results showed that the Lucite process has higher impacts on resource scarcity and ecosystem health, whereas the ACH process has higher impacts on human health. On the other hand, compared to the Lucite process, the lab-scale process has higher impacts in both the ecosystem and human health categories, with lower impacts only in the resource scarcity category. It was observed that the benefits of process improvements based on green chemistry principles might not be apparent in some categories due to limitations of the methodology. Process contribution analysis was also performed; it revealed that the contribution of energy is significant, so a sensitivity analysis with different energy scenarios was carried out. An uncertainty analysis using Monte Carlo simulation was also performed to assess the consistency of the results in each of the comparisons.
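Monte Carlo uncertainty analysis of the kind applied above can be sketched as repeated sampling of uncertain inputs propagated through the impact model. The toy impact model and the distributions below are assumptions for illustration, not LCA characterization factors or data from the study:

```python
import random

def impact_score(energy_kwh, emission_factor):
    """Toy impact model: impact proportional to energy use
    (illustrative stand-in, not an LCA characterization model)."""
    return energy_kwh * emission_factor

rng = random.Random(42)
samples = []
for _ in range(10_000):
    # Sample uncertain inputs from assumed distributions.
    energy = rng.gauss(100.0, 10.0)   # kWh per functional unit (hypothetical)
    factor = rng.uniform(0.4, 0.6)    # kg CO2-eq per kWh (hypothetical)
    samples.append(impact_score(energy, factor))

mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

The spread of the sampled impacts, not just their mean, is what tells the analyst whether a ranking between two process alternatives is robust to input uncertainty.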
Abstract:
Chemokines may contribute to local and systemic inflammation in patients with psoriasis. Previous studies have demonstrated the importance of chemokine ligands and receptors in the recruitment of T cells into psoriatic lesional skin and synovial fluid. The aim of this study was to evaluate the levels of Th1-related chemokines in psoriasis and to investigate any association with disease severity. We quantified serum levels of CXCL9, CXCL10 and CXCL16 and the frequencies of CD4+CXCR3+ T lymphocytes through ELISA and flow cytometry, respectively. A total of 38 patients with psoriasis and 33 controls were included. There were no significant differences in chemokine levels between the psoriasis and control groups. Patients with psoriatic arthritis had a lower median level of CXCL10 than controls (p=0.03). There were no significant correlations between the serum chemokines analyzed and disease severity. Frequencies of CD4+CXCR3+ T cells were lower in patients with psoriasis than in controls (p<0.01). A sensitivity analysis excluding patients on systemic therapy yielded similar results. Serum concentrations of CXCL9, CXCL10 and CXCL16 were neither increased in the psoriasis group nor correlated with disease severity. Systemic levels of chemokine ligands do not seem to be sensitive biomarkers of disease activity or accurate parameters for predicting response to therapy. Frequencies of CD4+CXCR3+ T cells were decreased in the peripheral blood of psoriasis patients, possibly due to recruitment to inflammatory lesions.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMAL® RDT and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon ran from the onset of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed on key model parameters. Results: In the base-case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMAL® in these remote areas if its high accuracy is maintained in the field. The decision to use rapid tests for malaria diagnosis in these areas depends on the accuracy microscopy currently achieves in the field.
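The incremental cost figure quoted above is an incremental cost-effectiveness ratio (ICER): extra cost divided by extra effect when moving from one diagnostic strategy to another. A minimal sketch of that arithmetic, with hypothetical inputs rather than the study's data:

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra
    unit of effect of the new strategy over the reference one."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical inputs (NOT the study's data): costs in US$,
# effects as adequately diagnosed cases per 1000 febrile patients.
ratio = icer(cost_new=12000.0, effect_new=950,
             cost_ref=9000.0, effect_ref=900)
# Here: extra US$ 3000 buys 50 extra adequate diagnoses,
# i.e. US$ 60 per additional adequately diagnosed case.
```

Sensitivity analysis then amounts to recomputing this ratio while the test sensitivities and specificities (and hence the effect counts) are varied over plausible ranges.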
Abstract:
This paper presents a new methodology to estimate harmonic distortions in a power system, based on measurements at a limited number of given sites. The algorithm utilizes evolutionary strategies (ES), a development branch of evolutionary algorithms. The main advantage of such a technique lies in its modeling flexibility as well as its potential to solve fairly complex problems. The problem-solving algorithm proposed herein makes use of data from various power-quality (PQ) meters, which can be synchronized either by high-technology global positioning system devices or by using information from a fundamental-frequency load flow. The second approach makes the overall PQ monitoring system much less costly. The algorithm is applied to an IEEE test network, for which sensitivity analysis is performed to determine how the parameters of the ES can be selected so that the algorithm performs effectively. Case studies show fairly promising results and the robustness of the proposed method.
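A minimal sketch of the evolution-strategy idea, assuming the simplest (1+1)-ES with a slowly decaying mutation step and a toy objective standing in for the harmonic-estimation error surface (the paper's actual ES variant and objective are not specified here):

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, decay=0.998, iters=2000, seed=1):
    """Minimal (1+1) evolution strategy: mutate the current solution,
    keep the mutant only when it improves the objective, and slowly
    shrink the mutation step."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = f(child)
        if fc < fx:
            x, fx = child, fc
        sigma *= decay
    return x, fx

# Toy objective in place of the harmonic-estimation error:
# the sphere function, minimized at the origin.
sphere = lambda v: sum(vi * vi for vi in v)
best, best_val = one_plus_one_es(sphere, [3.0, -2.0])
```

The sensitivity analysis the paper describes corresponds to varying parameters such as `sigma`, `decay`, and the population sizes of richer ES variants, and observing how estimation quality responds.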
Abstract:
The proposed method to analyze the composition of the cost of electricity is based on the energy conversion processes and the destruction of exergy through the several thermodynamic processes that comprise a combined cycle power plant. The method uses thermoeconomics to evaluate and allocate the cost of exergy throughout the processes, considering costs related to inputs and investment in equipment. Although the concept may be applied to any combined cycle or cogeneration plant, this work develops the mathematical modeling only for three-pressure heat recovery steam generator (HRSG) configurations with total condensation of the produced steam. Any n x 1 plant configuration (n sets of gas turbines and HRSGs associated with one steam turbine generator and condenser) can be studied with the developed model, assuming that every train operates identically and in steady state. The presented model was conceived from a complex configuration of a real power plant, over which variations may be applied in order to adapt it to a defined configuration under study [Borelli SJS. Method for the analysis of the composition of electricity costs in combined cycle thermoelectric power plants. Master in Energy Dissertation, Interdisciplinary Program of Energy, Institute of Electrotechnics and Energy, University of Sao Paulo, Sao Paulo, Brazil, 2005 (in Portuguese)]. The variations and adaptations include, for instance, the use of reheat, supplementary firing and partial-load operation. It is also possible to undertake sensitivity analysis on geometrical equipment parameters. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including the development of actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project involves the development of a topology optimization formulation for the dynamic design of piezoelectric laminated plates, aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior with specified resonance frequencies, modes, and an enhanced electromechanical coupling coefficient (EMCC). The finite element model employs a piezoelectric plate element based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are the pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem is formulated to design an eigenshape, i.e., to maximize and minimize vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues and enhancing their corresponding eigenshapes while maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
Abstract:
The objective of this paper is to develop and validate a mechanistic model for the degradation of phenol by the Fenton process. Experiments were performed in semi-batch operation, in which phenol, catechol and hydroquinone concentrations were measured. Using the methodology described in Pontes and Pinto [R.F.F. Pontes, J.M. Pinto, Analysis of integrated kinetic and flow models for anaerobic digesters, Chemical Engineering Journal 122 (1-2) (2006) 65-80], a stoichiometric model was first developed, with 53 reactions and 26 compounds, followed by the corresponding kinetic model. Sensitivity analysis was performed to determine the most influential kinetic parameters of the model, which were then estimated from the experimental results. The adjusted model was used to analyze the impact of the initial concentration and flow rate of reactants on the efficiency of the Fenton process in degrading phenol. Moreover, the model was applied to evaluate the cost of treating wastewater contaminated with phenol in order to meet environmental standards. (C) 2009 Elsevier B.V. All rights reserved.
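Kinetic-parameter sensitivity analysis of this kind is often done by finite differences: perturb one rate constant at a time and observe the relative change in a model output. The first-order decay model and rate constant below are illustrative stand-ins, not the 53-reaction Fenton model:

```python
import math

def phenol_remaining(k, t=60.0, c0=1.0):
    """Toy first-order decay: normalized concentration after time t
    (a stand-in for the full Fenton kinetic model)."""
    return c0 * math.exp(-k * t)

def relative_sensitivity(model, k, h=1e-6):
    """Normalized sensitivity (dy/y)/(dk/k) at parameter value k,
    estimated by a forward finite difference."""
    y0 = model(k)
    y1 = model(k + h * k)
    return (y1 - y0) / y0 / h

k = 0.02  # hypothetical rate constant, 1/min
S = relative_sensitivity(phenol_remaining, k)
```

For first-order decay the normalized sensitivity equals -k*t analytically (-1.2 with these numbers), so the finite-difference estimate can be checked directly; in the full model, parameters with the largest such magnitudes are the ones worth estimating from data.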
Abstract:
The objective of this paper is to develop a mathematical model for the synthesis of anaerobic digester networks based on the optimization of a superstructure that relies on a non-linear programming formulation. The proposed model contains the kinetic and hydraulic equations developed by Pontes and Pinto [Chemical Engineering Journal 122 (2006) 65-80] for two types of digesters, namely UASB (Upflow Anaerobic Sludge Blanket) and EGSB (Expanded Granular Sludge Bed) reactors. The objective function minimizes the overall sum of the reactor volumes. The optimization results show that a recycle stream is only effective for a reactor with short-circuit, such as the UASB reactor. Sensitivity analysis was performed on the one- and two-digester network superstructures for the following parameters: the UASB reactor short-circuit fraction and the EGSB reactor maximum organic load; the corresponding results vary considerably in terms of digester volumes. Scenarios for three- and four-digester network superstructures were optimized and compared with the results from fewer digesters. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
The inverse Weibull distribution can model failure-rate shapes that are quite common in reliability and biological studies. A three-parameter generalized inverse Weibull distribution with decreasing and unimodal failure rate is introduced and studied. We provide a comprehensive treatment of the mathematical properties of the new distribution, including expressions for the moment generating function and the rth generalized moment. The mixture model of two generalized inverse Weibull distributions is investigated, and the identifiability property of the mixture model is demonstrated. For the first time, we propose a location-scale regression model based on the log-generalized inverse Weibull distribution for modeling lifetime data. In addition, we develop some diagnostic tools for sensitivity analysis. Two applications to real data illustrate the potential of the proposed regression model.
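The unimodal failure rate mentioned above can be checked numerically for the ordinary two-parameter inverse Weibull (a special case of the generalized family): its hazard h(t) = f(t)/(1 - F(t)) rises to a single peak and then falls. The shape and scale values below are arbitrary illustrative choices:

```python
import math

def inv_weibull_hazard(t, shape=2.0, scale=1.0):
    """Hazard rate h(t) = f(t) / (1 - F(t)) of the two-parameter
    inverse Weibull distribution, cdf F(t) = exp(-(scale/t)**shape)."""
    z = (scale / t) ** shape
    cdf = math.exp(-z)
    pdf = shape * scale ** shape * t ** (-shape - 1) * math.exp(-z)
    return pdf / (1.0 - cdf)

# Sample the hazard on a grid: it rises to one peak, then falls.
ts = [0.1 * i for i in range(1, 101)]
hs = [inv_weibull_hazard(t) for t in ts]
peak_t = ts[hs.index(max(hs))]
```

With shape 2 and scale 1 the peak lands near t = 1, with the hazard tending to zero at both ends of the grid, which is the unimodal behavior that makes the family attractive for lifetime data.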