962 results for MONTE-CARLO SIMULATION
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which in turn are usually correlated with molecular descriptors. The primary goals of this thesis are: 1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; 2) to validate the models using internal and external cross-validation techniques; and 3) to quantify the model uncertainties through the Taylor method and Monte Carlo simulation. QSAR analysis is one of the most important ways to predict molecular properties such as ELUMO. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. Three approaches are typically used in QSAR model development: 1) Linear or Multi-Linear Regression (MLR); 2) Partial Least Squares (PLS); and 3) Principal Component Regression (PCR). A critical step in QSAR analysis is model validation, after the models are established and before they are applied to toxicity prediction. The DBPs studied span five chemical classes, including chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups of DBP chemicals (i.e., chloro-alkanes and aromatic compounds with a nitro or cyano group) to three types of organisms (fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature. The results show that: 1) QSAR models for predicting molecular properties built by MLR, PLS, or PCR can be used either to select valid data points or to eliminate outliers; 2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; 3) ELUMO is shown to correlate strongly with NCl for several classes of DBPs; and 4) according to the uncertainty analysis using the Taylor method, the uncertainty of the QSAR models stems mostly from NCl for all DBP classes.
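To make the uncertainty-quantification step concrete, the following minimal sketch propagates coefficient uncertainty through a hypothetical fitted MLR model of the form ELUMO = b0 + b1*NCl + b2*NC, comparing a Monte Carlo estimate with a first-order Taylor approximation. All coefficients and variances are illustrative placeholders, not the thesis's fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MLR QSAR model: ELUMO = b0 + b1*NCl + b2*NC
# (coefficients and uncertainties are illustrative, not the thesis values)
b = np.array([0.35, -0.42, 0.08])          # intercept, NCl, NC
cov_b = np.diag([0.02, 0.01, 0.005]) ** 2  # assumed coefficient covariance (diagonal)

def elumo(beta, ncl, nc):
    return beta[0] + beta[1] * ncl + beta[2] * nc

# Monte Carlo propagation: sample coefficient sets, evaluate the model for one compound
samples = rng.multivariate_normal(b, cov_b, size=100_000)
pred = elumo(samples.T, ncl=3, nc=2)       # e.g. a trichlorinated two-carbon compound

print(f"ELUMO prediction: {pred.mean():.3f} +/- {pred.std(ddof=1):.3f} (Monte Carlo)")

# First-order Taylor approximation for comparison:
# var ~ J Sigma J^T, with Jacobian J = [1, NCl, NC] for a linear model
J = np.array([1.0, 3.0, 2.0])
taylor_sd = np.sqrt(J @ cov_b @ J)
print(f"Taylor-method standard deviation: {taylor_sd:.3f}")
```

For a purely linear model the two answers coincide; the Monte Carlo route becomes the more general tool once the QSAR is nonlinear in its descriptors.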
Abstract:
Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend, among other factors, has made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) implement large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach to the construction of a time-varying covariance matrix: a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model is applied to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected returns of the securities are integrated into the technique through Monte Carlo simulation. Instead of being represented as deterministic values, the expected returns are assigned simulated values based on their historical measures. The time series of the securities are fitted to the probability distribution that best matches their characteristics according to the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P 500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. With Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercially available product, as the benchmark, the new greedy technique clearly outperforms the alternatives on samples of the S&P 500 and Russell 1000 securities. The resulting improvements in performance are consistent across five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
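As a rough illustration of the idea rather than the thesis's actual IDCC-GARCH-based procedure, the sketch below simulates return scenarios by Monte Carlo and then allocates portfolio weights with a simple greedy rule that minimises historical-simulation VaR; the asset set, distributions, and step size are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs (illustrative): 5 assets with assumed daily mean returns and covariance
mu = np.array([0.0004, 0.0006, 0.0003, 0.0005, 0.0002])
A = rng.normal(size=(5, 5)) * 0.01
cov = A @ A.T + np.eye(5) * 1e-4

def portfolio_var(weights, scenarios, alpha=0.95):
    """Historical-simulation VaR of the portfolio over simulated return scenarios."""
    pnl = scenarios @ weights
    return -np.quantile(pnl, 1 - alpha)

# Monte Carlo: simulate return scenarios instead of using deterministic expected returns
scenarios = rng.multivariate_normal(mu, cov, size=10_000)

# Greedy allocation: repeatedly add a small weight increment to whichever asset
# lowers (or least worsens) portfolio VaR, until the budget is fully allocated
n, step = len(mu), 0.05
weights = np.zeros(n)
while weights.sum() < 1.0 - 1e-9:
    candidates = []
    for i in range(n):
        trial = weights.copy()
        trial[i] += step
        candidates.append(portfolio_var(trial / trial.sum(), scenarios))
    weights[int(np.argmin(candidates))] += step

print("greedy weights:", np.round(weights, 2))
print("portfolio 95% VaR:", round(portfolio_var(weights, scenarios), 5))
```

The greedy rule evaluates only n candidates per increment, which is what makes large-scale problems tractable compared with a full joint optimisation over all weights.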
Abstract:
Community metabolism was investigated using a Lagrangian flow respirometry technique on 2 reef flats at Moorea (French Polynesia) during austral winter and Yonge Reef (Great Barrier Reef) during austral summer. The data were used to estimate related air-sea CO2 disequilibrium. A sine function did not satisfactorily model the diel light curves and overestimated the metabolic parameters. The ranges of community gross primary production and respiration (Pg and R; 9 to 15 g C m-2 d-1) were within the range previously reported for reef flats, and community net calcification (G; 19 to 25 g CaCO3 m-2 d-1) was higher than the 'standard' range. The molar ratio of organic to inorganic carbon uptake was 6:1 for both sites. The reef flat at Moorea displayed a higher rate of organic production and a lower rate of calcification compared to previous measurements carried out during austral summer. The approximate uncertainty of the daily metabolic parameters was estimated using a procedure based on a Monte Carlo simulation. The standard errors of Pg, R, and Pg/R expressed as a percentage of the mean are lower than 3% but are comparatively larger for E, the excess production (6 to 78%). The daily air-sea CO2 flux (FCO2) was positive throughout the field experiments, indicating that the reef flats at Moorea and Yonge Reef released CO2 to the atmosphere at the time of measurement. FCO2 decreased as a function of increasing daily irradiance.
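A minimal sketch of the kind of Monte Carlo error propagation described above: measured daily Pg and R are resampled from assumed normal errors, and the derived quantities Pg/R and excess production E = Pg - R are recomputed for each draw. The numbers are illustrative, not the paper's, but the example shows why E carries a much larger relative standard error than Pg or R (it is a small difference of two similar quantities).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (not the paper's) daily estimates and measurement uncertainties
Pg_mean, Pg_sd = 12.0, 0.3   # gross primary production, g C m-2 d-1
R_mean,  R_sd  = 11.5, 0.3   # community respiration,     g C m-2 d-1

# Monte Carlo resampling of the measured quantities
Pg = rng.normal(Pg_mean, Pg_sd, 50_000)
R  = rng.normal(R_mean,  R_sd,  50_000)

ratio = Pg / R          # Pg/R
E     = Pg - R          # excess production

for name, x in [("Pg/R", ratio), ("E", E)]:
    rel_se = 100 * x.std(ddof=1) / abs(x.mean())
    print(f"{name}: mean = {x.mean():.2f}, relative SE = {rel_se:.1f}%")
```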
Abstract:
The control of radioactive backgrounds will be key in the search for neutrinoless double beta decay at the SNO+ experiment. Several aspects of the SNO+ backgrounds have been studied. The SNO+ tellurium purification process may require ultra-low-background ethanol as a reagent. A low-background assay technique for ethanol was developed and used to identify a source of ethanol with measured 238U and 232Th concentrations below 2.8 × 10^-13 g/g and 10^-14 g/g respectively. It was also determined that at least 99.997% of the ethanol can be removed from the purified tellurium using forced air flow in order to reduce 14C contamination. In addition, a quality-control technique using an oxygen sensor was studied to monitor 222Rn contamination due to air leaking into the SNO+ scintillator during transport. The expected sensitivity of the technique is 0.1 mBq/L or better, depending on the oxygen sensor used. Finally, the dependence of the SNO+ neutrinoless double beta decay sensitivity on internal background levels was studied using Monte Carlo simulation. The half-life limit for neutrinoless double beta decay of 130Te after 3 years of operation was found to be 4.8 × 10^25 years under default conditions.
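For intuition on how background levels translate into a half-life sensitivity, here is a toy counting-experiment Monte Carlo: background-only pseudo-experiments are generated, a Bayesian 90% CL upper limit on signal counts is computed for each (flat prior, known background), and the median limit is converted to a half-life. The isotope count, efficiency, and background expectation are placeholders, not SNO+ values, and the limit-setting recipe is a simplification of the experiment's actual analysis.

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

rng = np.random.default_rng(3)

# Toy counting-experiment sensitivity study (illustrative numbers, not SNO+ values)
b_expected = 10.0       # expected background counts in the signal region
n_atoms    = 1e27       # number of 130Te atoms in the fiducial volume (assumed)
efficiency = 0.5        # signal detection efficiency (assumed)
live_years = 3.0

def upper_limit(n_obs, b, cl=0.90):
    """Bayesian upper limit on signal counts (flat prior, known background)."""
    def tail(s):
        return poisson.cdf(n_obs, b + s) / poisson.cdf(n_obs, b) - (1 - cl)
    return brentq(tail, 0.0, 100.0 + 10 * n_obs)

# Monte Carlo over background-only pseudo-experiments
limits = []
for n_obs in rng.poisson(b_expected, size=2000):
    s_up = upper_limit(n_obs, b_expected)
    t_half = np.log(2) * n_atoms * efficiency * live_years / s_up
    limits.append(t_half)

print(f"median half-life sensitivity: {np.median(limits):.2e} years")
```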
Abstract:
This study presents the validation of the observations made by the fisheries observer programme known as the Programa Bitácoras de Pesca (PBP) during the period 2005-2011, within the distribution area where the industrial purse-seine vessels targeting the north-central stock of Peruvian anchoveta (Engraulis ringens) operate. In addition, for the same period and area, the magnitudes of discards due to excess catch, discards of juveniles, and incidental catch of this fishery were estimated. A total of 3,768 trips were observed out of 302,859, a coverage of 1.2%. The data on discards due to excess catch, discards of juveniles, and incidental catch recorded on the observed trips were characterised by a high proportion of zeros. To validate the observations, a simulation study based on Monte Carlo methodology was carried out using a negative binomial distribution model. This makes it possible to infer the optimal coverage level and to determine whether the information obtained by the observer programme is reliable. From this analysis, it is concluded that current observation levels should be increased to a coverage of at least 10% of all trips made each year by the industrial purse-seine vessels targeting the north-central stock of Peruvian anchoveta. Discards due to excess catch, discards of juveniles, and incidental catch were estimated using three methodologies: bootstrap, General Linear Model (GLM), and Delta Model. Each methodology estimated different magnitudes with similar trends. The estimated magnitudes were compared using a Bayesian ANOVA, which showed little evidence that the estimated magnitudes of discards due to excess catch differed between methodologies; the same was found for incidental catch, whereas for discards of juveniles there were substantial differences. The methodology that satisfied the assumptions and explained the greatest variability of the modelled variables was the Delta Model, which appears to be the better alternative for the estimation given the high proportion of zeros in the data. The average estimates of discards due to excess catch, discards of juveniles, and incidental catch obtained with the Delta Model were 252,580, 41,772, and 44,823 tonnes respectively, which together represented 5.74% of landings. In addition, using the estimated magnitude of juvenile discards, a biomass projection exercise was carried out under the hypothetical scenario of no fishing mortality and assuming that the discarded juveniles measured only 8 and 11 cm; this yielded a biomass unavailable to the fishery of between 52 thousand and 93 thousand tonnes.
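A minimal sketch of the coverage-level question, under assumed negative binomial parameters rather than the study's fitted ones: per-trip discards with many zeros are simulated for the whole fleet, observer samples are drawn at several coverage rates, and the Monte Carlo spread of the expanded total shows how estimation error shrinks as coverage grows.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative simulation of observer-coverage levels (all parameters are assumptions)
n_trips = 302_859                      # total trips in the fleet
# Negative binomial discards per trip: many zeros, heavy right tail
mean_discard, dispersion = 1.2, 0.15   # tonnes per trip, NB size parameter
p = dispersion / (dispersion + mean_discard)
true_discards = rng.negative_binomial(dispersion, p, size=n_trips)
true_total = true_discards.sum()

for coverage in (0.012, 0.05, 0.10, 0.20):
    estimates = []
    for _ in range(200):                            # Monte Carlo replicates
        sample = rng.choice(true_discards, size=int(coverage * n_trips), replace=False)
        estimates.append(sample.mean() * n_trips)   # expand the sample mean to the fleet
    rel_err = 100 * np.std(estimates, ddof=1) / true_total
    print(f"coverage {coverage:5.1%}: relative error of total ~ {rel_err:.1f}%")
```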
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Master's)--University of Washington, 2016-08
Abstract:
The financial crisis of 2007-2008 led to extraordinary government intervention in firms and markets. The scope and depth of government action rivaled that of the Great Depression. Many traded markets experienced dramatic declines in liquidity, leading to conditions normally assumed to be promptly removed through the actions of profit-seeking arbitrageurs. These extreme events motivate the three essays in this work. The first essay seeks, and fails to find, evidence of investor behavior consistent with the broad 'Too Big To Fail' policies enacted during the crisis by government agents. Only in limited circumstances, where government guarantees such as deposit insurance or U.S. Treasury lending lines already existed, did investors impart a premium to the debt security prices of firms under stress. The second essay introduces the Inflation Indexed Swap Basis (IIS Basis) to examine the large differences between cash and derivative markets based upon future U.S. inflation as measured by the Consumer Price Index (CPI). It reports the consistently positive value of this measure as well as the very large positive values it reached in the fourth quarter of 2008 after Lehman Brothers went bankrupt. It concludes that the IIS Basis continues to exist due to limitations in market liquidity and hedging alternatives. The third essay explores the methodology of performing debt-based event studies utilizing credit default swaps (CDS). It provides practical implementation advice to researchers to address limited source data and/or small target firm sample sizes.
Abstract:
The occurrence frequency of failure events serves as a critical index of the safety status of dam-reservoir systems. Although overtopping is the most common failure mode with significant consequences, this type of event, in most cases, has a small probability. Estimating such rare-event risks for dam-reservoir systems with crude Monte Carlo (CMC) simulation requires a prohibitively large number of trials, and significant computational resources are needed to reach satisfactory estimates; otherwise, the estimates are not accurate enough. In order to reduce the computational expense and improve risk estimation efficiency, an importance sampling (IS) based simulation approach is proposed in this dissertation to address the overtopping risks of dam-reservoir systems. Deliverables of this study mainly include the following five aspects: 1) the reservoir inflow hydrograph model; 2) the dam-reservoir system operation model; 3) the CMC simulation framework; 4) the IS-based Monte Carlo (ISMC) simulation framework; and 5) a comparison of overtopping risk estimates from the CMC and ISMC simulations. In a broader sense, this study meets the following three expectations: 1) to address the natural stochastic characteristics of the dam-reservoir system, such as the reservoir inflow rate; 2) to build up the fundamental CMC and ISMC simulation frameworks of the dam-reservoir system in order to estimate the overtopping risks; and 3) to compare the simulation results and the computational performance in order to demonstrate the advantages of ISMC simulation. The estimated overtopping probability could be used to guide future dam safety investigations and studies, and to supplement conventional analyses in decision making on dam-reservoir system improvements. At the same time, the proposed ISMC simulation methodology is reasonably robust and is shown to improve the overtopping risk estimation. The more accurate estimates, smaller variance, and reduced CPU time expand the application of Monte Carlo (MC) techniques to evaluating rare-event risks for infrastructure.
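The contrast between crude Monte Carlo and importance sampling can be illustrated with a toy overtopping model in which the reservoir level is normally distributed and failure means exceeding the crest elevation; the distributions and threshold below are assumptions chosen only to make the event rare. Shifting the sampling distribution toward the failure region and reweighting by the likelihood ratio recovers the small probability with far fewer samples.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Toy overtopping model (illustrative): reservoir level ~ N(mu, sigma), crest at `crest`
mu, sigma, crest = 100.0, 2.0, 110.0     # failure if level > crest (a ~3e-7 event)
n = 200_000

# Crude Monte Carlo: likely sees zero failures at this sample size
levels = rng.normal(mu, sigma, n)
p_cmc = np.mean(levels > crest)

# Importance sampling: draw from a proposal centred on the failure threshold,
# then reweight each sample by the likelihood ratio f(x)/q(x)
proposal_mu = crest
x = rng.normal(proposal_mu, sigma, n)
w = norm.pdf(x, mu, sigma) / norm.pdf(x, proposal_mu, sigma)
ind = x > crest
p_is = np.mean(ind * w)
se_is = np.std(ind * w, ddof=1) / np.sqrt(n)

print(f"exact probability   : {norm.sf(crest, mu, sigma):.3e}")
print(f"crude MC estimate   : {p_cmc:.3e}")
print(f"importance sampling : {p_is:.3e} +/- {se_is:.1e}")
```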
Abstract:
Recent developments in automation, robotics, and artificial intelligence have pushed these technologies into wider use, and nowadays driverless transport systems are already state of the art on certain legs of transportation. This has prompted the maritime industry to join the advancement. The case organisation, the AAWA initiative, is a joint industry-academia research consortium with the objective of developing readiness for the first commercial autonomous solutions, exploiting state-of-the-art autonomous and remote technology. The initiative develops both autonomous and remote operation technology for navigation, machinery, and all on-board operating systems. The aim of this study is to develop a model with which to estimate and forecast the operational costs, and thus enable comparisons between manned and autonomous cargo vessels. The building process of the model is also described and discussed. Furthermore, the model aims to track and identify the critical success factors of the chosen ship design, and to enable monitoring and tracking of the incurred operational costs as the life cycle of the vessel progresses. The study adopts the constructive research approach, as the aim is to develop a construct to meet the needs of a case organisation. Data have been collected through discussions and meetings with consortium members and researchers, as well as through written and internal communication material. The model itself is built using activity-based life cycle costing, which enables both realistic cost estimation and forecasting and the identification of critical success factors, owing to the process orientation adopted from activity-based costing and the statistical nature of Monte Carlo simulation techniques. As the model was able to meet the multiple aims set for it, and the case organisation was satisfied with it, it can be argued that activity-based life cycle costing is a suitable method for cost estimation and forecasting in the case of autonomous cargo vessels. The model was able to perform the cost analysis and forecasting, as well as to trace the critical success factors. Later on, it also enabled, albeit hypothetically, monitoring and tracking of the incurred costs. By collecting costs this way, it is argued that the activity-based LCC model is able to facilitate learning from, and continuous improvement of, the autonomous vessel. As for the building process of the model, an individual approach was chosen, while still using the implementation and model-building steps presented in the existing literature. This was due to two factors: the nature of the model and, perhaps even more importantly, the nature of the case organisation. Furthermore, the loosely organised network structure means that knowing the case organisation and its aims is of great importance when conducting constructive research.
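As a hedged sketch of how an activity-based life cycle costing model can be combined with Monte Carlo simulation (the activities, volumes, and cost distributions below are invented for illustration and are not the AAWA model), each activity's unit cost is sampled per replication and accumulated over the vessel's life cycle, yielding a cost distribution rather than a single point forecast.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical activity-based life cycle cost model for a vessel (all figures illustrative)
years, n_sims = 25, 10_000

# Each activity: (annual cost-driver volume, (mean unit cost, std dev of unit cost))
activities = {
    "remote monitoring":   (8760, (12.0, 2.0)),    # hours/year, EUR/hour
    "planned maintenance": (40,   (5000, 800)),    # work orders/year, EUR/order
    "port calls":          (60,   (3000, 500)),    # calls/year, EUR/call
}

totals = np.zeros(n_sims)
for volume, (mean_cost, sd_cost) in activities.values():
    # Monte Carlo: sample a unit cost per replication, accumulate over the life cycle
    unit_cost = rng.normal(mean_cost, sd_cost, n_sims)
    totals += volume * unit_cost * years

print(f"expected life cycle cost: {totals.mean():,.0f} EUR")
print(f"5th-95th percentile     : {np.percentile(totals, 5):,.0f} - "
      f"{np.percentile(totals, 95):,.0f} EUR")
```

Because each activity keeps its own driver and cost distribution, the same run also reveals which activities dominate the spread, which is how the critical success factors can be traced.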
Abstract:
Rapid social, economic, cultural, and environmental changes have brought about significant shifts in lifestyles and contributed to the growth and spread of eating out. Portugal follows this trend of increasing food consumption outside the home: meals outside the home, which a few years ago were an occasional event, are now a regular practice for Portuguese families, not only during the working week but also at weekends. Visits to shopping centres, which have become a habit in our country, include a stop at the food courts, spaces notable for their food variety where fast-food meals predominate; an adequate and balanced choice of the foods consumed is therefore essential. The present work sought to assess the habits and perceptions of consumers of fast meals based on a specific menu whose main component is bread. Subsequently, and in accordance with consumption preferences, a nutritional evaluation of the choices was carried out. The study involved 150 individuals who visited a fast-food restaurant located in the food court of a shopping centre in Viseu. A self-administered questionnaire, developed by us and divided into 4 parts, was applied: sociodemographic characterisation; consumption habits of the respondents; products chosen by the respondents; and degree of satisfaction with the chosen products. Statistical analyses were performed using the Statistical Package for the Social Sciences - SPSS® for Windows, version 22. Chi-square tests with Monte Carlo simulation were carried out, considering a significance level of 0.05. Based on the most frequent choices made by the respondents, the nutritional evaluation of the menus was performed using the DIAL 1.19 (version 1) program and, where information was not available there, the online Portuguese food composition table (INSA, 2010). The values obtained for total caloric value (TCV), macronutrients, fibre, cholesterol, and sodium were compared with the recommended daily allowances (RDA). The sample comprised 68.7% women and 31.3% men, with a mean age of 29.9 ± 3 years, and was mostly employed (64.7%). Most respondents (54.7%) had higher education. A large part of the sample did not consider themselves habitual fast-food consumers and reported frequently eating a balanced diet; only 5% visited the premises more than once a week. Among the available products, the preference was for sandwiches and fries, with lunch being the time of greatest consumption. The nutritional evaluation of the respondents' preferred choices showed that the TCV of the menu including water as the drink is within the caloric limits recommended for lunch, except for the menus including a hot chicken sandwich on oregano bread or a cold fresh-cheese sandwich, which fall below the recommended minimum. In contrast, including a soft drink in the menu increases the TCV by 18%, regardless of the sandwich considered. A detailed analysis shows that these menus are unbalanced: 33.3% of them have protein values above the RDA, while carbohydrate and lipid values are mostly within the limits, with only 13.3% of the menus outside those values.
Regarding fibre and sodium intake, 86.7% of the menus are inadequate, with excessive sodium values and fibre values 33% below the recommended minimum. As this is a case study covering only a single restaurant in one food court, serving bread-based menus (sandwiches), the results are interpreted cautiously and without generalisation. We can nevertheless conclude, in view of the results obtained, that the salt content of the menus needs to be reduced. Furthermore, so that consumers can compare food options and make informed decisions, we consider it essential to make the nutritional information of the proposed menus available.
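For readers unfamiliar with the statistical step mentioned above, the following sketch shows a chi-square test of independence with a Monte Carlo simulated p-value on an invented contingency table (the counts are illustrative). It conditions on the row totals only, which is a simplification relative to exact-test procedures that condition on both margins.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)

# Hypothetical 2x3 table (e.g. sex vs preferred product); counts are illustrative
observed = np.array([[35, 48, 20],
                     [18, 21,  8]])

chi2_obs, _, _, expected = chi2_contingency(observed, correction=False)

# Monte Carlo simulated p-value: resample tables under independence,
# keeping row totals fixed and drawing columns from the pooled column proportions
row_totals = observed.sum(axis=1)
col_probs = observed.sum(axis=0) / observed.sum()

n_rep, exceed = 10_000, 0
for _ in range(n_rep):
    sim = np.vstack([rng.multinomial(n, col_probs) for n in row_totals])
    chi2_sim = ((sim - expected) ** 2 / expected).sum()
    exceed += chi2_sim >= chi2_obs

print(f"observed chi-square = {chi2_obs:.2f}")
print(f"Monte Carlo p-value ~ {(exceed + 1) / (n_rep + 1):.4f}")
```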
Abstract:
Incremental (uplift) models are statistical models that were originally developed in the field of marketing. They involve two groups, a control group and a treatment group, both compared with respect to a binary response variable (the possible responses being "yes" or "no"). These models aim to detect the effect of the treatment on the individuals under study. Since these individuals are not all customers, we refer to them as "prospects". This effect can be negative, zero, or positive depending on the characteristics of the individuals making up the different groups. The objective of this thesis is to compare incremental models from a Bayesian point of view and from a frequentist point of view. The incremental models used in practice are those of Lo (2002) and Lai (2004), which were originally formulated from a frequentist point of view. In this thesis, the Bayesian approach is therefore used and compared with the frequentist approach. The simulations are carried out on data generated with logistic regressions. The parameters of these regressions are then estimated with Monte Carlo simulations in the Bayesian approach and compared with those obtained in the frequentist approach. The parameter estimation has a direct influence on the model's ability to correctly predict the effect of the treatment on individuals. We consider three prior distributions for the Bayesian parameter estimation, chosen so that the priors are non-informative: the transformed beta distribution, the Cauchy distribution, and the normal distribution. Over the course of the study, we observe that the Bayesian methods have a genuine positive impact on the targeting of individuals in small samples.
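A minimal sketch of the Bayesian side of the comparison, using a hand-rolled random-walk Metropolis sampler for a logistic regression with independent Cauchy priors (one of the three priors mentioned); the data-generating model, prior scale, and tuning constants are assumptions for illustration, not the thesis's Lo (2002) or Lai (2004) specifications.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated uplift-style data (illustrative): treatment indicator plus one covariate
n = 2000
treat = rng.integers(0, 2, n)
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), treat, x, treat * x])   # includes treatment interaction
true_beta = np.array([-1.0, 0.4, 0.8, -0.5])
p = 1 / (1 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

def log_posterior(beta, scale=2.5):
    """Logistic log-likelihood plus independent Cauchy(0, scale) log-priors."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -np.sum(np.log1p((beta / scale) ** 2))     # Cauchy log-density up to a constant
    return loglik + logprior

# Random-walk Metropolis sampler: a minimal Monte Carlo alternative to dedicated MCMC software
beta = np.zeros(4)
current = log_posterior(beta)
samples = []
for it in range(20_000):
    proposal = beta + rng.normal(scale=0.05, size=4)
    cand = log_posterior(proposal)
    if np.log(rng.uniform()) < cand - current:
        beta, current = proposal, cand
    if it >= 5_000:                                       # discard burn-in
        samples.append(beta.copy())

samples = np.array(samples)
print("posterior means :", np.round(samples.mean(axis=0), 2))
print("true parameters :", true_beta)
```

The posterior draws can then be pushed through the uplift definition (difference in predicted response with and without treatment) to score prospects, which is where the comparison with the frequentist fit becomes meaningful.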
Abstract:
The anticipated growth of air traffic worldwide requires enhanced Air Traffic Management (ATM) technologies and procedures to increase the system capacity, efficiency, and resilience, while reducing environmental impact and maintaining operational safety. To deal with these challenges, new automation and information exchange capabilities are being developed through different modernisation initiatives toward a new global operational concept called Trajectory Based Operations (TBO), in which aircraft trajectory information becomes the cornerstone of advanced ATM applications. This transformation will lead to higher levels of system complexity, requiring enhanced Decision Support Tools (DST) to aid humans in the decision-making process. These will rely on accurate predicted aircraft trajectories, provided by advanced Trajectory Predictors (TP). The trajectory prediction process is subject to stochastic effects that introduce uncertainty into the predictions. Regardless of the assumptions that define the aircraft motion model underpinning the TP, deviations between predicted and actual trajectories are unavoidable. This thesis proposes an innovative method to characterise the uncertainty associated with a trajectory prediction based on the mathematical theory of Polynomial Chaos Expansions (PCE). Assuming univariate PCEs of the trajectory prediction inputs, the method describes how to generate multivariate PCEs of the prediction outputs that quantify their associated uncertainty. Arbitrary PCE (aPCE) was chosen because it allows a higher degree of flexibility in modelling input uncertainty. The obtained polynomial description can be used in subsequent prediction sensitivity analyses thanks to the relationship between polynomial coefficients and Sobol indices. The Sobol indices enable ranking the input parameters according to their influence on trajectory prediction uncertainty. The applicability of the aPCE-based uncertainty quantification detailed herein is analysed through a case study. This case study represents a typical aircraft trajectory prediction problem in ATM, in which uncertain parameters regarding aircraft performance, aircraft intent description, weather forecast, and initial conditions are considered simultaneously. Numerical results are compared to those obtained from a Monte Carlo simulation, demonstrating the advantages of the proposed method. The thesis includes two examples of DSTs (a Demand and Capacity Balancing tool, and an Arrival Manager) to illustrate the potential benefits of exploiting the proposed uncertainty quantification method.
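The sketch below contrasts plain Monte Carlo propagation with a simple polynomial chaos surrogate for a single Gaussian input, using probabilists' Hermite polynomials fitted by least squares; the "flight time" function and all numbers are invented, and a fixed Hermite basis is a simplification of the arbitrary PCE (aPCE) approach described in the thesis.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

rng = np.random.default_rng(9)

# Toy trajectory-prediction output as a function of one uncertain standardised input
# (e.g. a mass or wind deviation mapped to a Gaussian germ); the function is illustrative
def flight_time(xi):
    return 3600.0 + 120.0 * xi + 15.0 * xi**2 + 4.0 * np.sin(xi)

# Reference: plain Monte Carlo propagation (many model evaluations)
xi_mc = rng.standard_normal(200_000)
y_mc = flight_time(xi_mc)

# PCE surrogate: fit coefficients of probabilists' Hermite polynomials by least squares
degree = 4
xi_fit = rng.standard_normal(500)                 # far fewer model evaluations
V = He.hermevander(xi_fit, degree)                # design matrix of He_0 .. He_4
coeffs, *_ = np.linalg.lstsq(V, flight_time(xi_fit), rcond=None)

# For the He_k basis under a standard normal input: mean = c0, var = sum_k k! * c_k^2
pce_mean = coeffs[0]
pce_var = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, degree + 1))

print(f"Monte Carlo : mean = {y_mc.mean():.1f}, std = {y_mc.std(ddof=1):.2f}")
print(f"PCE (deg 4) : mean = {pce_mean:.1f}, std = {np.sqrt(pce_var):.2f}")
```

Once the coefficients are available, statistics and Sobol-type sensitivity information come directly from them, which is why the PCE route needs orders of magnitude fewer model evaluations than the Monte Carlo reference.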