840 results for parameter sensitivity analysis
Abstract:
This thesis proposes a strategy for the automatic estimation of hydrodynamic and transport parameters through the solution of inverse problems. Obtaining the parameters of a physical model is one of the main problems in its calibration, largely because of the difficulty of measuring those parameters in the field. In river and estuary modelling in particular, the roughness height and the turbulent diffusion coefficient are two of the parameters that are hardest to measure. This thesis presents an automated technique for estimating these parameters by solving an inverse problem applied to a model of the Macaé river estuary, located in the north of the state of Rio de Janeiro. The study used the MOHID platform, developed at the Universidade Técnica de Lisboa, which has been widely applied to the simulation of water bodies. A sensitivity analysis of the model responses with respect to the parameters of interest was carried out, showing that salinity is sensitive to both parameters. The inverse problem was then solved with several optimization methods by coupling the MOHID platform to optimization codes implemented in Fortran. The coupling was designed so as not to modify MOHID's source code, which makes the computational tool developed here usable with any version of the platform and easy to adapt to other simulators. The tests performed confirm the efficiency of the technique and point to the best approaches for fast and accurate parameter estimation.
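The coupling strategy described above, an external optimizer driving an unmodified simulator, can be sketched in a few lines. The sketch below is a minimal illustration, not the thesis code: the analytic `run_simulation` stands in for a MOHID run, and in practice it would write the parameter file and invoke the MOHID executable via `subprocess`.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for a MOHID run: in the real tool this function would write the
# roughness height and turbulent diffusivity into MOHID's input files, call
# the executable with subprocess.run, and read back the simulated salinity.
def run_simulation(roughness, diffusivity, t):
    return 30.0 * np.exp(-diffusivity * t) + 5.0 * roughness * np.sqrt(t)

t_obs = np.linspace(0.1, 10.0, 25)
rng = np.random.default_rng(1)
true = (0.02, 0.15)                      # "unknown" field parameters
s_obs = run_simulation(*true, t_obs) + rng.normal(0.0, 0.05, t_obs.size)

def misfit(p):
    # Sum of squared residuals between simulated and observed salinity.
    return np.sum((run_simulation(p[0], p[1], t_obs) - s_obs) ** 2)

res = minimize(misfit, x0=[0.05, 0.30], method="Nelder-Mead")
print("estimated roughness, diffusivity:", res.x)
```

Because the simulator is only ever reached through `run_simulation`, swapping in another platform means changing that one function, which is the property the coupling exploits.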
Abstract:
A study of the numerical solution of the diffusion-with-retention model proposed by Bevilacqua et al. (2011) is presented, together with an implicit formulation of the inverse problem for estimating the parameters involved in the model's mathematical formulation. Through a detailed sensitivity analysis and the computation of Pearson correlation coefficients, the prospects of solving the inverse problem successfully are assessed for the deterministic Levenberg-Marquardt method and for the stochastic Particle Collision Algorithm (PCA) and Differential Evolution (DE) methods. Results obtained with these three optimization methods are presented for three parameter-set cases. A strong correlation was observed between two of the three parameters, which hindered their simultaneous estimation; individual estimates of each parameter, however, were successful. Good results were obtained for the factors multiplying the differential terms of the equation that models the diffusion-with-retention phenomenon.
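The identifiability check described, correlating sensitivity coefficients before attempting the inverse problem, can be illustrated with a toy model. The two-parameter exponential below is a hypothetical stand-in, not the diffusion-with-retention equations; a Pearson correlation near ±1 between two sensitivity vectors signals that those parameters cannot be estimated simultaneously.

```python
import numpy as np

t = np.linspace(0.1, 5.0, 50)

def model(a, b):
    # Hypothetical stand-in for the model output at observation times t.
    return a * np.exp(-b * t)

a0, b0, h = 2.0, 0.8, 1e-6
# Finite-difference sensitivity coefficients dU/da and dU/db at the nominal point.
S_a = (model(a0 + h, b0) - model(a0 - h, b0)) / (2 * h)
S_b = (model(a0, b0 + h) - model(a0, b0 - h)) / (2 * h)

r = np.corrcoef(S_a, S_b)[0, 1]
print(f"Pearson correlation of sensitivities: {r:+.3f}")
# |r| close to 1 would indicate the two parameters are practically
# indistinguishable from these observations, as found for two of the
# three parameters in the abstract above.
```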
Abstract:
The exponential growth of health expenditure demands economic studies to support the decisions of public and private agents on the incorporation of new technologies into health systems. Positron emission tomography (PET) is a nuclear-medicine imaging technology of high cost and still recent diffusion in Brazil. The level of scientific evidence accumulated on its use in non-small cell lung cancer (NSCLC) is significant, with the technology showing better accuracy than conventional imaging techniques for mediastinal and distant staging. An economic evaluation conducted in 2013 indicates that it is cost-effective for NSCLC staging, from the perspective of the SUS (the Brazilian Unified Health System), compared with the current management strategy based on computed tomography (CT). The Ministry of Health (MS) added it to the list of procedures offered by the SUS in April 2014, but the economic and financial impacts of that decision remain unknown. This study aimed to estimate the budget impact (BI) of incorporating PET for NSCLC staging for the years 2014 to 2018, from the perspective of the SUS as the financer of health care. The estimates were calculated by the epidemiological method, based on the decision model of the cost-effectiveness study previously carried out. National incidence data were used; disease distribution and technology accuracy data came from the literature, and costs came from a micro-costing study and from SUS databases. Two strategies for using the new technology were analysed: (a) offering PET-CT to all patients; and (b) restricting it to patients with negative results on a prior CT. In addition, univariate and extreme-scenario sensitivity analyses were performed to assess the influence of possible sources of uncertainty in the parameters used. Incorporating PET-CT into the SUS would require additional resources of R$ 158.1 million (restricted offer) to R$ 202.7 million (broad offer) over five years, the difference between the two strategies being R$ 44.6 million over the period. In absolute terms, the total BI would be R$ 555 million (PET-CT after negative CT) and R$ 600 million (PET-CT for all) over the period. The cost of the PET-CT procedure was the most influential parameter in the estimates of expenditure on the new technology, followed by the proportion of patients undergoing mediastinoscopy. In the most optimistic extreme scenario, the incremental BIs would fall to R$ 86.9 million (PET-CT after negative CT) and R$ 103.9 million (PET-CT for all), while in the most pessimistic they would rise to R$ 194.0 and R$ 242.2 million, respectively. BI results, together with the evidence on the technology's cost-effectiveness, lend greater rationality to managers' final decisions. Incorporating PET into the clinical staging of NSCLC appears financially feasible given the magnitude of the MS budget, and the potential reduction in unnecessary surgeries may lead to more efficient allocation of available resources and better outcomes for patients, with better-indicated therapeutic strategies.
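The epidemiological budget-impact calculation described, incident cases filtered down to eligible patients and multiplied by a unit cost, with univariate sensitivity on the most influential parameter, reduces to simple arithmetic. The sketch below uses made-up placeholder numbers, not the study's Brazilian inputs.

```python
# Budget impact by the epidemiological method (illustrative numbers only).
incident_cases_per_year = 25_000   # hypothetical NSCLC incidence
eligible_fraction       = 0.60     # hypothetical share reaching staging
ct_negative_fraction    = 0.55     # hypothetical share with negative prior CT
unit_cost               = 3_000.0  # hypothetical PET-CT procedure cost (R$)
years                   = 5

def budget_impact(fraction_scanned, cost):
    patients = incident_cases_per_year * eligible_fraction * fraction_scanned
    return patients * cost * years

broad      = budget_impact(1.0, unit_cost)                   # PET-CT for all
restricted = budget_impact(ct_negative_fraction, unit_cost)  # CT-negative only
print(f"broad offer:      R$ {broad / 1e6:.1f} M over {years} years")
print(f"restricted offer: R$ {restricted / 1e6:.1f} M over {years} years")

# Univariate sensitivity on the unit cost, the dominant parameter in the study.
for factor in (0.75, 1.25):
    bi = budget_impact(1.0, unit_cost * factor)
    print(f"unit cost x{factor}: broad offer R$ {bi / 1e6:.1f} M")
```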
Abstract:
Production processes must be evaluated continuously so that they operate as effectively and efficiently as possible. The set of tools used for this purpose is called statistical process control (SPC), and SPC tools allow periodic monitoring; the most important of them is the control chart. This thesis focuses on monitoring a response variable through the parameters, or coefficients, of a simple linear regression model. Adaptive χ² control charts are proposed for monitoring the coefficients of the simple linear regression model. More specifically, seven adaptive χ² control charts for monitoring linear profiles are developed: with variable sample size; variable sampling interval; variable control and warning limits; variable sample size and sampling interval; variable sample size and limits; variable sampling interval and limits; and, finally, with all design parameters variable. Performance measures of the proposed charts were obtained through Markov chain properties, for both the zero-state and the steady-state cases, showing a reduction in the mean time to signal for small to moderate shifts in the coefficients of the regression model of the production process. The proposed charts were applied to an example from a semiconductor manufacturing process. In addition, a sensitivity analysis was carried out as a function of shifts of different magnitudes in the process parameters, namely the intercept and the slope, comparing the performance of the developed charts with one another and with the fixed-parameter χ² chart. The charts proposed in this thesis are suitable for many types of application. This work also considered quality characteristics represented by a nonlinear regression model. For the nonlinear regression model considered, the proposal is to use a method that splits the nonlinear profile into linear segments; more specifically, an algorithm proposed in the literature for this purpose was used. In this way it was possible to validate the proposed technique, showing that it is robust in the sense of accommodating different types of nonlinear profiles. A nonlinear profile is thus approximated by piecewise-linear profiles, which allows each linear profile to be monitored with control charts such as those developed in this thesis. The methodology of decomposing a nonlinear profile into linear segments is presented in full detail, opening the way for broad use.
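The Markov-chain evaluation mentioned can be sketched for the simplest adaptive scheme, a variable-sample-size (VSS) χ² chart monitoring p = 2 regression coefficients. The transient states are the two sample sizes, the signal is an absorbing state, and the zero-state ARL is the first entry of (I − Q)⁻¹·1. All design values below are illustrative, and the scalar δ is taken directly as the standardized shift magnitude, not the thesis's exact parameterization.

```python
import numpy as np
from scipy.stats import chi2, ncx2

p = 2                    # two monitored coefficients: intercept and slope
n_small, n_large = 3, 7  # variable sample sizes (illustrative design)
w = chi2.ppf(0.70, p)    # warning limit (illustrative)
k = chi2.ppf(0.995, p)   # control limit (illustrative)
delta = 0.5              # standardized shift magnitude; delta = 0 is in control

def region_probs(n, delta):
    # P(T <= w) and P(w < T <= k) for the chi-square statistic with
    # noncentrality n * delta**2 under the shifted process.
    lam = n * delta ** 2
    cdf = (lambda x: ncx2.cdf(x, p, lam)) if lam > 0 else (lambda x: chi2.cdf(x, p))
    return cdf(w), cdf(k) - cdf(w)

# Transient transition matrix Q: column 0 -> next sample small, 1 -> large.
Q = np.zeros((2, 2))
for i, n in enumerate((n_small, n_large)):
    Q[i, 0], Q[i, 1] = region_probs(n, delta)

arl = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(f"zero-state ARL (starting with the small sample): {arl[0]:.1f}")
# A steady-state ARL would instead weight the entries of arl by the
# stationary distribution of the in-control chain, as done in the thesis.
```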
Abstract:
Climate change is expected to have a significant impact on the future thermal performance of buildings. Building simulation and sensitivity analysis can be employed to predict these impacts, guiding interventions to adapt buildings to future conditions. This article explores the use of simulation to study the impact of climate change on a theoretical office building in the UK, employing a probabilistic approach. The work studies (1) appropriate performance metrics and underlying modelling assumptions, (2) the sensitivity of computational results, to identify key design parameters, and (3) the impact of zonal resolution. The conclusions highlight the importance of assumptions about electricity conversion factors, of proper management of internal heat gains, and of using an appropriately detailed zonal resolution.
Abstract:
The uncertainty associated with a rainfall-runoff and non-point source loading (NPS) model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find structural limitations in scale for the use of the conceptual NPS model, and then examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, the analysis of stochastic suitability and of the uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results demonstrate a narrow range of acceptable model structures for stochastic analysis with the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions.
Abstract:
The diversity of non-domestic buildings at the urban scale poses a number of difficulties for developing models for large-scale analysis of the stock. This research proposes a probabilistic, engineering-based, bottom-up model to address these issues. In a recent study we classified London's non-domestic buildings based on the service they provide, such as offices, retail premises, and schools, and proposed the creation of one probabilistic representational model per building type. This paper investigates techniques for the development of such models. The representational model is a statistical surrogate of a dynamic energy simulation (ES) model. We first identify the main parameters affecting energy consumption in a particular building sector/type using sampling-based global sensitivity analysis methods, and then generate statistical surrogates of the dynamic ES model within the dominant model parameters. Given a sample of actual energy consumption for that sector, we use the surrogate model to infer the distribution of model parameters by inverse analysis. The inferred distributions of input parameters can quantify the relative benefits of alternative energy saving measures for an entire building sector, with the requisite quantification of uncertainties. Secondary school buildings are used to illustrate the application of this probabilistic method.
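The first stage of this workflow, sampling-based global sensitivity analysis to find the dominant parameters, can be sketched with standardized regression coefficients (SRCs), one common sampling-based measure. The `energy_model` function below is a toy stand-in for the dynamic ES model, and the parameter names and ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_model(u_value, infiltration, gains):
    # Toy stand-in for a dynamic energy simulation of one building type.
    return 120.0 * u_value + 40.0 * infiltration ** 1.5 + 0.8 * gains

# Monte Carlo sample over assumed input ranges.
N = 2000
X = np.column_stack([
    rng.uniform(0.2, 2.5, N),    # wall U-value, W/m2K (assumed range)
    rng.uniform(0.1, 1.2, N),    # infiltration, ach (assumed range)
    rng.uniform(5.0, 40.0, N),   # internal gains, W/m2 (assumed range)
])
y = energy_model(*X.T)

# Standardized regression coefficients: regress standardized y on standardized X.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, s in zip(["U-value", "infiltration", "gains"], src):
    print(f"SRC {name:12s} {s:+.2f}")
```

The linear fit is itself a first statistical surrogate of the ES model; the parameters with the largest |SRC| would be retained for the inverse analysis described in the abstract.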
Abstract:
This paper describes a computational study of lean premixed high-pressure methane-air flames, using Computational Fluid Dynamics (CFD) together with a reactor network approach. A detailed chemical reaction mechanism is employed to predict pollutant concentrations, with emphasis on nitrogen oxide emissions. The reacting flow field is divided into separate zones, within each of which the physical and chemical conditions are homogeneous. The defined zones are interconnected, forming an Equivalent Reactor Network (ERN). Three flames for which experimental data are available are examined: Flame A has an equivalence ratio of 0.43, while Flames B and C are richer, with equivalence ratios of 0.5 and 0.56 respectively. Computations are performed for a range of operating conditions, quantifying the effect on the emitted NOx levels, and model predictions are compared against the available experimental data. A sensitivity analysis is performed to investigate the effect of the network size, in order to define the optimum number of reactors for accurate prediction of the species mass fractions.
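The zonal idea can be illustrated with a drastically simplified serial network: a flame zone approximated by adiabatic equilibrium with the slow NOx species reset, followed by a post-flame zone in which NO forms kinetically over the residence time. The sketch assumes Cantera is installed and uses GRI-Mech 3.0 at made-up operating conditions; the paper's mechanism, zone layout and conditions differ.

```python
import cantera as ct

# Lean premixed methane-air at elevated pressure (illustrative conditions,
# not those of Flames A-C).
gas = ct.Solution("gri30.yaml")
gas.set_equivalence_ratio(0.50, "CH4", "O2:1.0, N2:3.76")
gas.TP = 700.0, 10.0 * ct.one_atm

# Zone 1, flame: major species taken to adiabatic equilibrium, but NOx reset
# to zero because thermal NO is far slower than the fuel chemistry.
gas.equilibrate("HP")
X = gas.X.copy()
for sp in ("NO", "NO2", "N2O"):
    X[gas.species_index(sp)] = 0.0
gas.TPX = gas.T, gas.P, X          # Cantera renormalizes the composition

# Zone 2, post-flame: a constant-pressure reactor in which NO grows
# kinetically with residence time.
reactor = ct.IdealGasConstPressureReactor(gas)
net = ct.ReactorNet([reactor])
for tau_ms in (5, 10, 20, 40):     # residence times in milliseconds
    net.advance(tau_ms * 1e-3)
    print(f"tau = {tau_ms:3d} ms   NO = {reactor.thermo['NO'].X[0] * 1e6:6.1f} ppm")
```

A full ERN would instead connect several stirred reactors with mass-flow couplings and vary their number, which is exactly the network-size sensitivity the paper studies.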
Abstract:
The circumstances are investigated under which high peak acceleration can occur in the internal parts of a system subjected to impulsive driving on the outside. Previous work using a coupled beam model has highlighted the importance of veering pairs of modes. Such a veering pair can be approximated by a lumped system with two degrees of freedom. The worst case of acceleration amplification is shown to occur when the two oscillators are tuned to the same frequency, and for this case closed-form expressions are derived showing how the acceleration ratio depends on the mass ratio and the coupling strength. Sensitivity analysis of the eigenvalues and eigenvectors indicates that the mass ratio is the most sensitive parameter for altering the veering behaviour of an undamped system. Non-proportional damping is also shown to have a strong influence on the veering behaviour. The study gives design guidelines for achieving permissible acceleration levels through the choice of the effective mass and damping of the indirectly driven subsystem relative to the directly driven subsystem.
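The veering pair itself is easy to reproduce: two oscillators joined by a weak coupling spring, with the second stiffness swept through the tuned point. The generalized eigenproblem below shows the eigenvalue loci approaching and repelling rather than crossing; the numerical values are illustrative, not the paper's.

```python
import numpy as np
from scipy.linalg import eigh

m1, m2 = 1.0, 0.05     # directly and indirectly driven masses (mass ratio 0.05)
k1, kc = 1.0, 0.02     # grounding stiffness and weak coupling stiffness

M = np.diag([m1, m2])
for k2 in np.linspace(0.020, 0.042, 12):   # sweep detuning through veering
    K = np.array([[k1 + kc, -kc],
                  [-kc,     k2 + kc]])
    lam = eigh(K, M, eigvals_only=True)    # generalized eigenvalues omega^2
    w = np.sqrt(lam)
    print(f"k2 = {k2:.4f}   w1 = {w[0]:.4f}   w2 = {w[1]:.4f}   gap = {w[1] - w[0]:.4f}")
# At the tuned point the loci do not cross: the minimum gap is controlled by
# the coupling kc and the mass ratio, the two quantities appearing in the
# closed-form expressions mentioned above.
```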
Abstract:
The ion-exchange equilibrium of bovine serum albumin (BSA) on an anion exchanger, DEAE Spherodex M, was studied in batch adsorption experiments at pH values from 5.26 to 7.6 and ionic strengths from 10 to 117.1 mmol/l. Using the unadjustable adsorption equilibrium parameters obtained from the batch experiments, the applicability of the steric mass-action (SMA) model for describing protein ion-exchange equilibrium in different buffer systems was analysed. A parametric sensitivity analysis was performed by perturbing each of the model parameters while holding the others constant. The simulation results showed that the precision of the model prediction decreased at high salt concentrations or at pH values close to the isoelectric point of the protein. The parametric sensitivity analysis showed that the characteristic charge and the protein steric factor had the largest effects on the ion-exchange equilibrium, while the effect of the equilibrium constant was about 70-95% smaller than those of the characteristic charge and the steric factor under all conditions investigated. With a relationship between the adjusted characteristic charge and the salt concentration, the SMA model predicts protein adsorption isotherms well over a wide pH range, from 5.84 to 7.6. The SMA model could be further improved by taking into account the effect of salt concentration on the intermolecular interactions of proteins.
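The SMA isotherm is implicit in the bound protein concentration q: c = (q/K)·(c_s/(Λ − (ν+σ)q))^ν, where Λ is the ionic capacity, ν the characteristic charge and σ the steric factor. A sketch of the one-at-a-time perturbation study follows, with illustrative parameter values rather than those fitted for BSA on DEAE Spherodex M.

```python
from scipy.optimize import brentq

def sma_q(c, cs, K, nu, sigma, Lam):
    # Solve K*c*((Lam - (nu+sigma)*q)/cs)**nu - q = 0 for q on (0, q_max).
    f = lambda q: K * c * ((Lam - (nu + sigma) * q) / cs) ** nu - q
    q_max = Lam / (nu + sigma)
    return brentq(f, 0.0, q_max * (1.0 - 1e-9))

base = dict(K=0.3, nu=5.0, sigma=50.0, Lam=500.0)  # illustrative SMA parameters
c, cs = 0.1, 100.0                                 # protein and salt, mmol/l

q0 = sma_q(c, cs, **base)
print(f"baseline q = {q0:.4f} mmol/l")
# Perturb each parameter by +/-10% while holding the others constant.
for name in base:
    for factor in (0.9, 1.1):
        pert = {**base, name: base[name] * factor}
        dq = (sma_q(c, cs, **pert) - q0) / q0 * 100.0
        print(f"{name:6s} x{factor:.1f}: change in q = {dq:+6.1f} %")
```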
Abstract:
There has been increased use of the Doubly-Fed Induction Machine (DFIM) in ac drive applications in recent times, particularly in renewable energy systems and other high-power variable-speed drives. The DFIM is widely regarded as the optimal generation system for both onshore and offshore wind turbines and has also been considered for wave power applications. Wind power generation is the most mature renewable technology, but wave energy has attracted large interest recently because the potential for power extraction is very significant. Various wave energy converter (WEC) technologies currently exist, the oscillating water column (OWC) type converter being one of the most advanced. There are fundamental differences between the profile of the pneumatic power supplied by the OWC WEC and that of a wind turbine, and this causes significant challenges in the selection and rating of electrical generators for OWC devices. The thesis initially aims to provide an accurate per-phase equivalent-circuit model of the DFIM by investigating various characterisation testing procedures. A novel testing methodology based on series-coupling tests is employed and is found to provide a more accurate representation of the DFIM than the standard IEEE testing methods, because the series-coupling tests determine the equivalent-circuit resistances and inductances of the machine directly. A second novel method, known as the extended short-circuit test, is also presented and investigated as an alternative characterisation method. Experimental results on a 1.1 kW DFIM and a 30 kW DFIM using the various characterisation procedures are presented. The test methods are analysed and validated by comparing model predictions and torque-versus-speed curves for each induction machine. Sensitivity analysis is used to quantify the effect of experimental error on the results of each testing procedure and to determine the suitability of each procedure for characterising each device; the series-coupling differential test is demonstrated to be the optimum test. The research then focuses on the OWC WEC and the modelling of this device. A software model is implemented based on data obtained from a scaled prototype device situated at the Irish test site. Test data from the electrical system of the device are analysed and used to develop a performance curve for the air turbine used in the WEC. This performance curve was applied in a software model to represent the turbine in the electro-mechanical system, and the software results are validated against the measured electrical output data from the prototype test device. Finally, once both the DFIM and the OWC WEC power take-off system have been modelled successfully, the application of the DFIM to the OWC WEC model is investigated to determine the electrical machine rating required for the pulsating power produced by the OWC WEC device. Thermal analysis of a 30 kW induction machine is carried out using a first-order thermal model. The simulations quantify the limits of operation of the machine and enable the development of rating requirements for the electrical generation system of the OWC WEC.
The thesis can be considered to have three sections. The first, comprising Chapters 2 and 3, focuses on the accurate characterisation of the doubly-fed induction machine using various testing procedures. The second, Chapter 4, concentrates on the modelling of the OWC WEC power take-off, with particular focus on the Wells turbine; this model is validated through comparison of simulations and experimental measurements. The third section combines the OWC WEC model from Chapter 4 with a 30 kW induction machine model to determine the optimum device rating for the specified machine, using thermal simulations of the machine to give a general insight into electrical machine rating for an OWC WEC device.
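A first-order thermal model of the kind used for the rating study has a single thermal capacitance and resistance: C_th·dT/dt = P_loss(t) − T/R_th, with T the temperature rise over ambient. The sketch below drives it with a pulsating OWC-like loss profile and checks the peak rise against a limit; all parameter values are illustrative assumptions, not the 30 kW machine's.

```python
import numpy as np

R_th, C_th = 0.05, 20_000.0    # K/W and J/K (illustrative machine constants)
T_limit = 80.0                 # permissible temperature rise, K (illustrative)

def p_loss(t, p_avg=1200.0, wave_period=8.0):
    # OWC pneumatic power pulsates at twice the wave frequency, so machine
    # losses are modelled here as a raised sinusoid (illustrative profile).
    return p_avg * (1.0 + np.sin(2.0 * np.pi * t / (wave_period / 2.0)) ** 2)

dt, t_end = 0.1, 4.0 * 3600.0
T, T_max = 0.0, 0.0
for t in np.arange(0.0, t_end, dt):
    T += dt * (p_loss(t) - T / R_th) / C_th   # explicit Euler step
    T_max = max(T_max, T)

print(f"peak temperature rise: {T_max:.1f} K (limit {T_limit} K)")
# Rating the machine for OWC duty amounts to finding the largest average
# pneumatic power for which this peak rise stays below the limit.
```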
Abstract:
Background: Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, prompting obstetric discourse on the risks and benefits for mother and infant. These increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness of modes of delivery, the aim of this paper was to perform an economic evaluation of the costs and short-term maternal health consequences of a trial of labour after one previous caesarean delivery compared with ERCD for low-risk women in Ireland.
Methods: Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed in which the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of the international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs were derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations.
Results: Maternal morbidities emerged in twice as many cases in the TOLAC group as in the ERCD group. However, TOLAC was found to be the more cost-effective method of delivery, because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively) and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis.
Conclusions: Clinicians need to be well informed of the benefits and risks of TOLAC among low-risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time. While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesise previous research and new and relevant evidence within a single comprehensive decision model.
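The probabilistic sensitivity analysis mentioned in the Results is typically a Monte Carlo exercise over the decision model's inputs. The sketch below reuses the point estimates from the abstract (costs of €1,835.06 vs €4,039.87 and QALYs of 0.84 vs 0.70) and wraps them in assumed gamma and normal distributions; the distributional choices, spreads and willingness-to-pay threshold are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

def gamma_draws(mean, se):
    # Gamma parameterized by mean and standard error (common for cost inputs).
    shape = (mean / se) ** 2
    return rng.gamma(shape, se ** 2 / mean, n)

# Point estimates from the abstract; 20% relative SE is an assumption.
cost_tolac = gamma_draws(1835.06, 0.20 * 1835.06)
cost_ercd  = gamma_draws(4039.87, 0.20 * 4039.87)
qaly_tolac = rng.normal(0.84, 0.02, n)     # assumed spread
qaly_ercd  = rng.normal(0.70, 0.02, n)     # assumed spread

p_dominant = np.mean((cost_tolac < cost_ercd) & (qaly_tolac > qaly_ercd))
print(f"P(TOLAC cheaper and more effective) = {p_dominant:.3f}")

# Net monetary benefit at an assumed willingness to pay of 45,000 EUR/QALY.
wtp = 45_000.0
nmb = wtp * (qaly_tolac - qaly_ercd) - (cost_tolac - cost_ercd)
print(f"P(TOLAC has higher net benefit)     = {np.mean(nmb > 0):.3f}")
```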
Abstract:
We applied coincident Earth observation data collected during 2008 and 2009 from multiple sensors (RA2, AATSR and MERIS, mounted on the European Space Agency satellite Envisat) to characterise environmental conditions and integrated sea-air fluxes of CO2 in three Arctic seas (Greenland, Barents, Kara). We assessed the sensitivity of the net CO2 sink to changes in temperature, salinity and sea-ice duration arising from future climate scenarios. During the study period the Greenland and Barents seas were net sinks for atmospheric CO2, with integrated sea-air fluxes of −36 ± 14 and −11 ± 5 Tg C yr⁻¹, respectively, and the Kara Sea was a weak net CO2 source, with an integrated sea-air flux of +2.2 ± 1.4 Tg C yr⁻¹. The combined integrated CO2 sea-air flux from all three was −45 ± 18 Tg C yr⁻¹. In a sensitivity analysis we varied temperature, salinity and sea-ice duration. Variations in temperature and salinity modify the transfer velocity, the solubility and the partial pressure of CO2, taking into account the resultant variations in alkalinity and dissolved organic carbon (DOC). Our results showed that warming had a strong positive effect on the annual integrated sea-air flux of CO2 (i.e. reducing the sink), freshening had a strong negative effect, and reduced sea-ice duration had a small but measurable positive effect. In the climate change scenario examined, the effects of warming over just over a decade of climate change up to 2020 outweighed the combined effects of freshening and reduced sea-ice duration. Collectively these effects gave an integrated sea-air flux change of +4.0 Tg C in the Greenland Sea, +6.0 Tg C in the Barents Sea and +1.7 Tg C in the Kara Sea, reducing the Greenland and Barents sinks by 11% and 53%, respectively, and increasing the weak Kara Sea source by 81%. Overall, the regional integrated flux changed by +11.7 Tg C, a 26% reduction in the regional sink. In terms of CO2 sink strength, we conclude that the Barents Sea is the most susceptible of the three regions to the climate changes examined. Our results imply that the region will cease to be a net CO2 sink in the 2050s.
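The sea-air flux underlying these budgets is F = k·K₀·(pCO2_water − pCO2_air). The sketch below uses a quadratic wind-speed transfer velocity, k = 0.251·U²·(Sc/660)^(−1/2) in cm/h, together with the CO2 Schmidt-number polynomial of Wanninkhof (2014); the solubility value and input conditions are illustrative, and the paper's exact parameterizations may differ.

```python
def schmidt_co2(t_c):
    # Schmidt number of CO2 in seawater, Wanninkhof (2014) polynomial.
    return (2116.8 - 136.25 * t_c + 4.7353 * t_c**2
            - 0.092307 * t_c**3 + 0.0007555 * t_c**4)

def co2_flux(u10, t_c, dpco2_uatm, k0=0.060):
    """Sea-air CO2 flux in mmol m-2 day-1 (negative = ocean uptake).

    u10: wind speed (m/s); t_c: SST (deg C); dpco2_uatm: pCO2(sea) - pCO2(air);
    k0: solubility in mol L-1 atm-1 (~0.06 for cold Arctic water, illustrative).
    """
    k_cm_h = 0.251 * u10**2 * (schmidt_co2(t_c) / 660.0) ** -0.5
    k_m_day = k_cm_h * 0.24                      # cm/h -> m/day
    k0_mol_m3_uatm = k0 * 1000.0 * 1e-6          # mol/L/atm -> mol/m3/uatm
    return k_m_day * k0_mol_m3_uatm * dpco2_uatm * 1000.0   # mol -> mmol

base = co2_flux(u10=8.0, t_c=2.0, dpco2_uatm=-60.0)
warm = co2_flux(u10=8.0, t_c=4.0, dpco2_uatm=-60.0)
print(f"flux at 2 degC: {base:+.1f} mmol m-2 day-1")
print(f"flux at 4 degC: {warm:+.1f} mmol m-2 day-1 (Schmidt-number effect only)")
# In the paper's full sensitivity analysis, warming also raises pCO2(water)
# and lowers solubility, which is why it weakens the sink more than the
# Schmidt-number effect alone suggests.
```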
Abstract:
We used coincident Envisat RA2 and AATSR temperature and wind speed data from 2008/2009 to calculate the global net sea-air flux of dimethyl sulfide (DMS), which we estimate to be 19.6 Tg S a⁻¹. Our monthly flux calculations are compared to open-ocean eddy correlation measurements of DMS flux from 10 recent cruises, with a root mean square difference of 3.1 μmol m⁻² day⁻¹. In a sensitivity analysis, we varied temperature, salinity, surface wind speed, and aqueous DMS concentration, using fixed global changes as well as CMIP5 model output, and the range of DMS flux under future climate scenarios is discussed. The CMIP5 model predicts a reduction in surface wind speed, and we estimate that this will decrease the global annual sea-air flux of DMS by 22% over 25 years. Concurrent changes in temperature, salinity, and DMS concentration increase the global flux by much smaller amounts; the net effect of all CMIP5-modelled 25-year predictions was a 19% reduction in global DMS flux. The 25-year DMS concentration changes had significant regional effects, some positive (Southern Ocean, North Atlantic, Northwest Pacific) and some negative (isolated regions along the Equator and in the Indian Ocean). Using satellite-detected coverage of coccolithophore blooms, our estimate of their contribution to North Atlantic DMS emissions suggests that coccolithophores contribute only a small percentage of the North Atlantic annual flux, but may be more important in the summertime and in the northeast Atlantic.