830 results for Global sensitivity analysis
Abstract:
PURPOSE: This systematic review aimed to report and explore the survival of dental veneers constructed from non-feldspathic porcelain over 5 and 10 years.
MATERIALS AND METHODS: A total of 4,294 articles were identified through a systematic search involving all databases in the Cochrane Library, MEDLINE (OVID), EMBASE, Web of Knowledge, specific journals (hand-search), conference proceedings, clinical trials registers, and collegiate contacts. Articles, abstracts, and gray literature were sought by two independent researchers. There were no language limitations. One hundred sixteen studies were identified for full-text assessment, with 10 included in the analysis (5 qualitative, 5 quantitative). Study characteristics and survival (Kaplan-Meier estimated cumulative survival and 95% confidence interval [CI]) were extracted or recalculated. A failed veneer was one which required an intervention that disrupted the original marginal integrity, had been partially or completely lost, or had lost retention more than twice. A meta-analysis and sensitivity analysis of Empress veneers was completed, with an assessment of statistical heterogeneity and publication bias. Clinical heterogeneity was explored for results of all veneering materials from included studies.
RESULTS: Within the 10 studies, veneers were fabricated with IPS Empress, IPS Empress 2, Cerinate, and the Cerec computer-aided design/computer-assisted manufacture (CAD/CAM) materials VITA Mark I, VITA Mark II, and Ivoclar ProCad. The meta-analysis showed the pooled estimate for Empress veneers to be 92.4% (95% CI: 89.8% to 95.0%) for 5-year survival and 66% to 94% (95% CI: 55% to 99%) for 10 years. Data regarding other non-feldspathic porcelain materials were lacking, with only a single study each reporting outcomes for Empress 2, Cerinate, and various Cerec porcelains over 5 years. The sensitivity analysis showed that data from one study had an influencing and stabilizing effect on the 5-year pooled estimate.
CONCLUSION: The long-term outcome (> 5 years) of non-feldspathic porcelain veneers is sparsely reported in the literature. This systematic review indicates that the 5-year cumulative estimated survival for etchable non-feldspathic porcelain veneers is over 90%. Outcomes may prove clinically acceptable with time, but evidence remains lacking and the use of these materials for veneers remains experimental.
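The pooled-survival step described in this abstract can be sketched as inverse-variance (fixed-effect) pooling of study-level Kaplan-Meier estimates; the study values below are illustrative, not the review's data:

```python
# Inverse-variance (fixed-effect) pooling of study-level survival
# proportions, as used when meta-analysing cumulative survival.
# The (survival, standard error) pairs below are illustrative only.

def pooled_survival(estimates):
    """estimates: list of (survival, standard_error) per study."""
    weights = [1.0 / se**2 for _, se in estimates]
    pooled = sum(w * s for (s, _), w in zip(estimates, weights)) / sum(weights)
    se_pooled = (1.0 / sum(weights)) ** 0.5
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

studies = [(0.93, 0.02), (0.91, 0.03), (0.94, 0.015)]
est, (lo, hi) = pooled_survival(studies)
print(f"pooled 5-year survival: {est:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

A random-effects variant would additionally inflate each study's variance by a between-study component before weighting.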
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. Grid nudging and integration time were the numerical options tested. Since the goal was to simulate the near-surface wind, the physical parameterization schemes under evaluation were those for the boundary layer. The influences of local terrain complexity and simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Overall, the model reproduced the local wind regime well, despite a significant underestimation of the wind speed. Wind direction is reasonably simulated, especially in regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to larger deviations between simulations and observations. Within the tested options, results show that using grid nudging with simulations whose integration time does not exceed 2 days is the best numerical configuration, and the parameterization set composed of the MM5–Yonsei University–Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve model performance.
Results suggest that error minimization in the wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest together with the use of high resolution terrain data, if available.
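The evaluation metrics named above (Root Mean Square Error, Bias, Standard Deviation Error) can be computed directly from paired simulated/observed series; a minimal sketch with synthetic wind speeds, not the study's data:

```python
import math

def evaluation_metrics(sim, obs):
    """RMSE, bias and standard-deviation error between simulated and
    observed wind speed series (paired, same length)."""
    n = len(sim)
    diffs = [s - o for s, o in zip(sim, obs)]
    bias = sum(diffs) / n                      # negative => underestimation
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    sd = lambda x: math.sqrt(sum((v - sum(x) / len(x)) ** 2 for v in x) / len(x))
    sde = sd(sim) - sd(obs)                    # negative => model under-disperses
    return rmse, bias, sde

# Synthetic wind speeds in m/s (illustrative only)
sim = [4.1, 5.0, 6.2, 3.8, 5.5]
obs = [4.8, 5.6, 7.0, 4.1, 6.2]
rmse, bias, sde = evaluation_metrics(sim, obs)
print(f"RMSE={rmse:.2f}  Bias={bias:.2f}  SDE={sde:.2f}")
```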
Abstract:
The concept of tie-rod anchoring arose in the context of promoting the global interaction of buildings, namely establishing the aforementioned connections so as to prevent the outward overturning of façade walls under seismic action or foundation settlement. Accordingly, the present work aims to study the behaviour of tie-rod systems anchored perpendicular to the façades when loaded in tension. However, since masonry is a heterogeneous material, it was necessary to break the tie-rod system down and study each of its components: ties injected into masonry, and anchorage systems. First, a preliminary study of ties injected into masonry was carried out, covering their design, a sensitivity analysis, the presentation of a case study, and a comparison of results. In a second phase, a literature review of the most common types of anchorage systems was conducted, addressing aspects such as their importance, purpose, and conditions of application. Finally, the two components were combined and the complete tie-rod systems were studied. Their use and the interest in their application were examined, and a design methodology was analysed for their insertion in brick and stone masonry. On completing this study, conclusions were drawn and future perspectives suggested.
Abstract:
This work aimed to evaluate and compare the environmental impacts of butanol production considering three production processes: one using fossil sources and two using renewable sources, namely wheat straw and corn. For the first case the oxo process was considered, while the other two used the ABE (acetone, butanol and ethanol) production process. In a first stage the different processes were studied and described. Life cycle assessment was then applied, carrying out its four phases: goal and scope definition, inventory, impact assessment, and interpretation of the results. The inventory was compiled from the existing literature on these processes with the aid of the Ecoinvent Version 3 Database™. The impact assessment used the Impact 2002+ (Endpoint) method. It was concluded that butanol production by the ABE process using corn has the highest environmental impact, and that butanol production by the ABE process using wheat straw has the lowest, when allocation was performed according to the masses of all products generated in each process. A sensitivity analysis was performed for butanol production from wheat straw and corn with respect to the lower-quality data. In the wheat-straw process, the amount of material sent to anaerobic digestion and the amount of effluent produced were varied. In the corn process only the amount of effluent produced was varied. The variations had little effect (<1.3%) on the overall impact. Finally, the impacts were calculated considering an economic allocation based on 2013 European selling prices for the products generated by the different processes.
Considering the economic value, the relative weight of butanol increased, which significantly increased its environmental impact. This is largely due to the low economic value of the gases formed in the fermentation processes. If the mass of these gases is excluded from the mass-based allocation, the results obtained are similar for the two allocation types.
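The contrast between mass and economic allocation described above can be illustrated with a small calculation; the product masses, prices and total impact below are hypothetical, not the study's inventory data:

```python
# Allocation of a process's total environmental impact among co-products,
# by mass and by economic value. Quantities and prices are hypothetical.

total_impact = 1000.0  # kg CO2-eq per batch

products = {  # name: (mass in kg, price in EUR/kg)
    "butanol": (60.0, 1.4),
    "acetone": (30.0, 1.1),
    "ethanol": (10.0, 0.7),
    "fermentation_gases": (100.0, 0.0),  # off-gas with near-zero market value
}

def allocate(basis):
    """basis: dict name -> allocation quantity (mass or revenue)."""
    total = sum(basis.values())
    return {n: total_impact * q / total for n, q in basis.items()}

mass_basis = {n: m for n, (m, _) in products.items()}
econ_basis = {n: m * p for n, (m, p) in products.items()}

print(f"butanol share, mass allocation:     {allocate(mass_basis)['butanol']:.0f} kg CO2-eq")
print(f"butanol share, economic allocation: {allocate(econ_basis)['butanol']:.0f} kg CO2-eq")
```

Because the fermentation gases carry mass but essentially no value, switching to economic allocation shifts their impact share onto butanol, mirroring the effect reported above.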
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas release. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of these chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals such as chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for chlorine release from a chlor-alkali industry located in the area. The results of consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations.
Mapping of threat zones due to different incident outcome cases from different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate the failure probability of components precisely due to insufficient data or the vague characteristics of the basic event. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
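A conventional (crisp) fault-tree evaluation with a one-at-a-time sensitivity ranking of basic events can be sketched as follows; the gate structure, event names and probabilities are hypothetical, not the thesis's chlorine release tree, and the fuzzy extension is omitted:

```python
# Top-event probability for a small fault tree (an OR of a basic event and
# an AND gate), plus a simple sensitivity ranking: zero out each basic
# event and measure the resulting drop in top-event probability.
# Gate structure and probabilities are hypothetical.

def or_gate(ps):
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(ps):
    q = 1.0
    for p in ps:
        q *= p
    return q

def top_event(p):
    # TOP = valve_leak OR (seal_fail AND alarm_fail)
    return or_gate([p["valve_leak"], and_gate([p["seal_fail"], p["alarm_fail"]])])

basic = {"valve_leak": 1e-3, "seal_fail": 5e-2, "alarm_fail": 1e-2}
base = top_event(basic)
for event in basic:
    reduced = dict(basic, **{event: 0.0})
    contribution = 100.0 * (base - top_event(reduced)) / base
    print(f"{event}: {contribution:.1f}% of top-event probability")
```

In a fuzzy FTA, each crisp probability would be replaced by a fuzzy number elicited from experts and propagated through the same gate algebra.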
Abstract:
Optimisation and harmonisation are key factors for good performance in the chemical industry. BASF has developed a project called Accelerator. The objective of this project has been the harmonisation and integration of supply chain processes worldwide. The basic inventory management process was left out of the project and had to be analysed. The inventory management department at BASF SE has been developing its own strategy for the definition of global manufacturing processes. This work reports on the phases of the strategy formulation and establishes some guidelines for the implementation phase taking place in 2012 and 2013.
Abstract:
Background: Genetic and epigenetic factors interacting with the environment over time are the main causes of complex diseases such as autoimmune diseases (ADs). Among the environmental factors are organic solvents (OSs), which are chemical compounds used routinely in commercial industries. Since controversy exists over whether ADs are caused by OSs, a systematic review and meta-analysis were performed to assess the association between OSs and ADs. Methods and Findings: The systematic search was done in the PubMed, SCOPUS, SciELO and LILACS databases up to February 2012. Any type of study that used accepted classification criteria for ADs and had information about exposure to OSs was selected. Out of a total of 103 articles retrieved, 33 were finally included in the meta-analysis. The final odds ratios (ORs) and 95% confidence intervals (CIs) were obtained by the random effects model. A sensitivity analysis confirmed that results were not sensitive to restrictions on the data included. Publication bias was trivial. Exposure to OSs was associated with systemic sclerosis, primary systemic vasculitis and multiple sclerosis individually, and also with all the ADs evaluated taken together as a single trait (OR: 1.54; 95% CI: 1.25-1.92; p-value 0.001). Conclusion: Exposure to OSs is a risk factor for developing ADs. As a corollary, individuals with non-modifiable risk factors (i.e., familial autoimmunity or carrying genetic factors) should avoid any exposure to OSs in order to avoid increasing their risk of ADs.
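The random-effects pooling used in such a meta-analysis can be sketched with the DerSimonian-Laird estimator; the per-study odds ratios and standard errors below are illustrative, not data from the included studies:

```python
import math

def dersimonian_laird(log_ors, ses):
    """Random-effects pooled OR via the DerSimonian-Laird tau^2 estimator."""
    w = [1.0 / s**2 for s in ses]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    df = len(log_ors) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                       # between-study variance
    w_re = [1.0 / (s**2 + tau2) for s in ses]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return math.exp(pooled), (math.exp(pooled - 1.96 * se),
                              math.exp(pooled + 1.96 * se))

# Illustrative per-study log odds ratios and standard errors
log_ors = [math.log(1.8), math.log(1.3), math.log(1.6), math.log(1.1)]
ses = [0.25, 0.20, 0.30, 0.15]
or_pooled, (lo, hi) = dersimonian_laird(log_ors, ses)
print(f"pooled OR = {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```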
Abstract:
This paper analyzes the measure of systemic importance ΔCoVaR proposed by Adrian and Brunnermeier (2009, 2010) within the context of a similar class of risk measures used in the risk management literature. In addition, we develop a series of testing procedures, based on ΔCoVaR, to identify and rank the systemically important institutions. We stress the importance of statistical testing in interpreting the measure of systemic importance. An empirical application illustrates the testing procedures, using equity data for three European banks.
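A purely empirical sketch of ΔCoVaR (the system's VaR conditional on an institution sitting at its own VaR versus at its median) on simulated returns; the data-generating process and coefficients are invented for illustration, and the quantile-regression estimation and testing procedures of the paper are omitted:

```python
import random

random.seed(7)

def quantile(xs, q):
    """Linearly interpolated empirical quantile."""
    s = sorted(xs)
    idx = q * (len(s) - 1)
    low = int(idx)
    frac = idx - low
    high = min(low + 1, len(s) - 1)
    return s[low] + frac * (s[high] - s[low])

# Simulated daily returns: the "system" loads on the institution's return,
# so distress at the institution spills over (loading 0.6 is invented).
inst = [random.gauss(0.0, 0.02) for _ in range(5000)]
system = [0.6 * r + random.gauss(0.0, 0.01) for r in inst]

var5 = quantile(inst, 0.05)   # institution's 5% VaR (a loss quantile)
med = quantile(inst, 0.50)

def covar(threshold, tol=0.002):
    # 5% quantile of system returns, conditional on the institution's
    # return lying in a small band around the given threshold
    cond = [s for s, r in zip(system, inst) if abs(r - threshold) < tol]
    return quantile(cond, 0.05)

delta_covar = covar(var5) - covar(med)   # negative: distress worsens system VaR
print(f"DeltaCoVaR = {delta_covar:.4f}")
```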
Abstract:
The implementation of European Directive 91/271/EEC concerning urban wastewater treatment promoted the construction of new facilities, as well as the introduction of new technologies for nutrient removal in areas designated as sensitive. Both the design of these new infrastructures and the redesign of existing ones were carried out using approaches based mainly on economic objectives, owing to the need to complete the works in a relatively short period. These studies were based on heuristic knowledge or on numerical correlations derived from simplified deterministic models. As a result, many of the resulting wastewater treatment plants (WWTPs) were characterised by a lack of robustness and flexibility, poor controllability, frequent microbiological solids-separation problems in the secondary settler, high operating costs and only partial nutrient removal, keeping them far from optimal operation. Many of these problems arose from inadequate design, and the scientific community thus became aware of the importance of the early conceptual design stages. Precisely for this reason, traditional design methods must evolve towards more complex evaluation systems that take multiple objectives into account, thereby ensuring better plant performance. Despite the importance of multi-objective conceptual design, there is still a significant gap in the scientific literature addressing this research field. The objective of this thesis is to develop a multi-objective conceptual design method for WWTPs that serves as a decision-support tool when selecting the best alternative among different design options.
This research contributes a modular, evolutionary design method that combines different techniques: hierarchical decision processes, multicriteria analysis, preliminary multi-objective optimisation based on sensitivity analysis, knowledge extraction and data mining, multivariate analysis, and uncertainty analysis based on Monte Carlo simulation. This is achieved by subdividing the design method developed in this thesis into four main blocks: (1) hierarchical generation and multicriteria analysis of alternatives, (2) analysis of critical decisions, (3) multivariate analysis and (4) uncertainty analysis. The first block combines a hierarchical decision process with multicriteria analysis. The hierarchical decision process subdivides the conceptual design into a series of questions that are easier to analyse and evaluate, while the multicriteria analysis allows several objectives to be considered simultaneously. This reduces the number of alternatives to evaluate and ensures that the future design and operation of the plant is influenced by environmental, economic, technical and legal aspects. Finally, this block includes a sensitivity analysis of the weights, which shows how the ranking of alternatives changes as the relative importance of the design objectives varies. The second block combines sensitivity analysis, preliminary multi-objective optimisation and knowledge-extraction techniques to support WWTP conceptual design, selecting the best alternative once critical decisions have been identified. Critical decisions are those in which one must choose among alternatives that satisfy the design objectives to a similar degree but with different implications for the future structure and operation of the plant.
This type of analysis provides a broader view of the design space and allows desirable (or undesirable) directions in which the design process may evolve to be identified. The third block of the thesis provides the multivariate analysis of the multicriteria matrices obtained during the evaluation of the design alternatives. Specifically, the techniques used in this research comprise: 1) cluster analysis, 2) principal component analysis/factor analysis and 3) discriminant analysis. As a result, better access to the data is possible when selecting among alternatives, providing more information for a more effective evaluation and ultimately increasing knowledge of the evaluation process for the generated design alternatives. In the fourth and final block developed in this thesis, the different design alternatives are evaluated under uncertainty. The objective of this block is to study how decision-making changes when an alternative is evaluated with or without uncertainty in the parameters of the models that describe its behaviour. Uncertainty in the model parameters is introduced through probability distributions. Monte Carlo simulations are then carried out, in which random numbers drawn from these distributions are substituted for the model parameters, allowing the propagation of uncertainty through the model to be studied. It is thus possible to analyse the variation in the overall fulfilment of the design objectives for each alternative, the contributions of environmental, legal, economic and technical aspects to this variation, and finally the change in the selection of alternatives when the relative importance of the design objectives varies. Compared with traditional design approaches, the method developed in this thesis addresses design/redesign problems taking into account multiple objectives and multiple criteria.
At the same time, the decision-making process shows objectively, transparently and systematically why one alternative is selected over the others, providing the option that best fulfils the stated objectives, showing its strengths and weaknesses and the main correlations between objectives and alternatives, and finally accounting for the uncertainty inherent in the model parameters used during the analyses. The capabilities of the developed method are demonstrated in this thesis through different case studies: selection of the type of biological nitrogen removal (case study #1), optimisation of a control strategy (case study #2), redesign of a plant to achieve simultaneous carbon, nitrogen and phosphorus removal (case study #3) and finally analysis of plant-wide control strategies (case studies #4 and #5).
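The Monte Carlo uncertainty-propagation step of the fourth block can be sketched as sampling uncertain model parameters and recomputing a weighted multicriteria score per draw; the criteria, weights, model and distributions below are hypothetical stand-ins:

```python
import random
import statistics

random.seed(42)

# Weighted multicriteria score of one design alternative; the criterion
# scores depend on uncertain model outputs sampled per iteration.
# Criteria names, weights, the toy "model" and distributions are hypothetical.
weights = {"environmental": 0.3, "economic": 0.3, "technical": 0.2, "legal": 0.2}

def sample_scores():
    effluent_n = random.gauss(8.0, 1.5)  # mg N/L, uncertain model output
    opex = random.gauss(1.2, 0.2)        # MEUR/yr, uncertain model output
    return {
        "environmental": max(0.0, 1.0 - effluent_n / 15.0),
        "economic": max(0.0, 1.0 - opex / 2.0),
        "technical": 0.7,                               # fixed expert score
        "legal": 1.0 if effluent_n < 10.0 else 0.0,     # discharge limit met?
    }

runs = [sum(weights[c] * s for c, s in sample_scores().items())
        for _ in range(5000)]
print(f"overall score = {statistics.mean(runs):.3f} ± {statistics.stdev(runs):.3f}")
```

Repeating this for each alternative yields score distributions rather than point values, which is what allows the change in alternative ranking under uncertainty to be studied.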
Abstract:
A new dynamic model of water quality, Q(2), has recently been developed, capable of simulating large branched river systems. This paper describes the application of a generalized sensitivity analysis (GSA) to Q(2) for single reaches of the River Thames in southern England. Focusing on the simulation of dissolved oxygen (DO), since this may be regarded as a proxy for the overall health of a river, the GSA is used to identify key parameters controlling model behavior and to provide a probabilistic procedure for model calibration. It is shown that, in the River Thames at least, it is more important to obtain high-quality forcing functions than to obtain improved parameter estimates once approximate values have been estimated. Furthermore, there is a need to ensure reasonable simulation of a range of water quality determinands, since a focus only on DO increases predictive uncertainty in the DO simulations. The Q(2) model has been applied here to the River Thames, but it has broad utility for evaluating other systems in Europe and around the world.
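A generalized sensitivity analysis in the Hornberger-Spear (regional sensitivity analysis) sense, as referenced above, can be sketched as Monte Carlo sampling followed by a behavioural/non-behavioural split; the one-line DO "model" below is a toy stand-in for Q(2):

```python
import random

random.seed(1)

# Regional (generalized) sensitivity analysis sketch: sample parameters,
# classify each run as behavioural if simulated DO stays above a bound,
# then compare parameter distributions between the two classes.
# The one-line "model" is a toy stand-in, not the Q(2) model.

def simulate_do(reaeration, bod_decay):
    saturation = 10.0  # mg/L, notional DO saturation
    return saturation * reaeration / (reaeration + bod_decay)

behavioural, non_behavioural = [], []
for _ in range(2000):
    k_a = random.uniform(0.1, 2.0)   # reaeration rate (1/day)
    k_d = random.uniform(0.05, 1.0)  # BOD decay rate (1/day)
    do = simulate_do(k_a, k_d)
    (behavioural if do > 6.0 else non_behavioural).append(k_a)

mean_b = sum(behavioural) / len(behavioural)
mean_n = sum(non_behavioural) / len(non_behavioural)
# A large separation between the two k_a distributions flags a key parameter.
print(f"mean k_a | behavioural: {mean_b:.2f}, non-behavioural: {mean_n:.2f}")
```

The behavioural parameter sets themselves double as a probabilistic calibration: sampling from them propagates only parameter values consistent with observed behaviour.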
Abstract:
The evaluation of life cycle greenhouse gas emissions from power generation with carbon capture and storage (CCS) is a critical factor in energy and policy analysis. The current paper examines life cycle emissions from three types of fossil-fuel-based power plants, namely supercritical pulverized coal (super-PC), natural gas combined cycle (NGCC) and integrated gasification combined cycle (IGCC), with and without CCS. Results show that, for a 90% CO2 capture efficiency, life cycle GHG emissions are reduced by 75-84% depending on what technology is used. With GHG emissions less than 170 g/kWh, IGCC technology is found to be favorable to NGCC with CCS. Sensitivity analysis reveals that, for coal power plants, varying the CO2 capture efficiency and the coal transport distance has a more pronounced effect on life cycle GHG emissions than changing the length of CO2 transport pipeline. Finally, it is concluded from the current study that while the global warming potential is reduced when MEA-based CO2 capture is employed, the increase in other air pollutants such as NOx and NH3 leads to higher eutrophication and acidification potentials.
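The one-at-a-time sensitivity analysis described above can be sketched as follows; the baseline emission factors and parameter perturbations are illustrative, not the paper's inventory:

```python
# One-at-a-time sensitivity of life-cycle GHG emissions (g CO2-eq/kWh)
# for a coal plant with CCS. All factors below are illustrative.

def life_cycle_ghg(capture_eff, coal_km, co2_pipeline_km):
    combustion = 800.0 * (1.0 - capture_eff)         # residual stack emissions
    upstream = 60.0 + 0.05 * coal_km                 # mining + coal transport
    transport_storage = 2.0 + 0.01 * co2_pipeline_km # CO2 pipeline + injection
    return combustion + upstream + transport_storage

base = life_cycle_ghg(0.90, 500, 100)
cases = [
    ("capture 85%",     dict(capture_eff=0.85, coal_km=500,  co2_pipeline_km=100)),
    ("coal 1000 km",    dict(capture_eff=0.90, coal_km=1000, co2_pipeline_km=100)),
    ("pipeline 300 km", dict(capture_eff=0.90, coal_km=500,  co2_pipeline_km=300)),
]
for name, kwargs in cases:
    delta = life_cycle_ghg(**kwargs) - base
    print(f"{name}: {delta:+.1f} g/kWh")
```

With these (invented) factors, perturbing capture efficiency and coal transport distance moves the total far more than lengthening the CO2 pipeline, mirroring the qualitative finding above.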
Abstract:
Most building services products are installed while a building is constructed, but they are not operated until the building is commissioned. The warranty of the products may cover the time starting from their installation to the end of the warranty period. Prior to the commissioning of the building, the products are at a dormant mode (i.e., not operated) but protected by the warranty. For such products, both the usage intensity and the failure patterns are different from those with continuous usage intensity and failure patterns. This paper develops warranty cost models for repairable products with a dormant mode from both the manufacturer's and buyer's perspectives. Relationships between the failure patterns at the dormant mode and at the operational mode are also discussed. Numerical examples and sensitivity analysis are used to demonstrate the applicability of the methodology derived in the paper.
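The dormant/operational structure described above can be sketched with a piecewise failure intensity under minimal repair: a reduced Poisson rate while the product is dormant but under warranty, and the full operational rate after commissioning. Rates, durations and the repair cost below are illustrative, not the paper's model:

```python
# Expected warranty cost for a minimally repaired product whose warranty
# starts at installation but whose operation starts at commissioning.
# Failures follow a Poisson process with a reduced intensity while dormant.
# All numeric values are illustrative.

def expected_warranty_cost(warranty_yrs, dormant_yrs, lam_dormant,
                           lam_operational, cost_per_repair):
    dormant = min(dormant_yrs, warranty_yrs)
    operational = max(0.0, warranty_yrs - dormant)
    expected_failures = lam_dormant * dormant + lam_operational * operational
    return cost_per_repair * expected_failures

cost = expected_warranty_cost(warranty_yrs=3.0, dormant_yrs=1.0,
                              lam_dormant=0.02, lam_operational=0.15,
                              cost_per_repair=400.0)
print(f"expected warranty cost: {cost:.2f}")
```

Sensitivity analysis in this setting amounts to re-evaluating the cost while varying one input (e.g. the dormant duration or either failure rate) at a time.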
Abstract:
Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the “Tripleclouds” scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using “exponential-random” overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found to be in areas of marine stratocumulus; the effect of vertical overlap is found to be fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on the radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.
Abstract:
In this study, we systematically compare a wide range of observational and numerical precipitation datasets for Central Asia. Data considered include two re-analyses, three datasets based on direct observations, and the output of a regional climate model simulation driven by a global re-analysis. These are validated and intercompared with respect to their ability to represent the Central Asian precipitation climate. In each of the datasets, we consider the mean spatial distribution and the seasonal cycle of precipitation, the amplitude of interannual variability, the representation of individual yearly anomalies, the precipitation sensitivity (i.e. the response to wet and dry conditions), and the temporal homogeneity of precipitation. Additionally, we carried out part of these analyses for datasets available in real time. The mutual agreement between the observations is used as an indication of how far these data can be used for validating precipitation data from other sources. In particular, we show that the observations usually agree qualitatively on anomalies in individual years, while it is not always possible to use them for the quantitative validation of the amplitude of interannual variability. The regional climate model is capable of improving the spatial distribution of precipitation. At the same time, it strongly underestimates summer precipitation and its variability, while interannual variations are well represented during the other seasons, in particular in the Central Asian mountains during winter and spring.
Abstract:
During the Northern Hemisphere summer, absorbed solar radiation melts snow and the upper surface of Arctic sea ice to generate meltwater that accumulates in ponds. The melt ponds reduce the albedo of the sea ice cover during the melting season, with a significant impact on the heat and mass budget of the sea ice and the upper ocean. We have developed a model, designed to be suitable for inclusion into a global circulation model (GCM), which simulates the formation and evolution of the melt pond cover. In order to be compatible with existing GCM sea ice models, our melt pond model builds upon the existing theory of the evolution of the sea ice thickness distribution. Since this theory does not describe the topography of the ice cover, which is crucial to determining the location, extent, and depth of individual ponds, we have needed to introduce some assumptions. We describe our model, present calculations and a sensitivity analysis, and discuss our results.