4 results for computational cost
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
One of the biggest challenges facing contaminant hydrogeology is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity, and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We quantify the environmental impact and, given the incomplete knowledge of hydrogeological parameters, we evaluate which are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables, and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. Sobol indices are adopted as sensitivity measures; they are computed by employing meta-models that characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
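As a concrete sketch of the variance-based approach described in this abstract, the snippet below estimates first-order and total Sobol indices with a Saltelli-style pick-and-freeze Monte Carlo scheme in plain NumPy. The `model` function is a hypothetical stand-in for the thesis's meta-model of the migration process; the uniform parameter bounds and sample size are likewise illustrative assumptions, not the thesis's settings.

```python
import numpy as np

# Hypothetical stand-in for the transport meta-model: maps uncertain
# hydrogeological parameters (e.g. conductivity, porosity, dispersivity)
# to a scalar output such as a hydrocarbon concentration.
def model(x):
    return np.exp(-x[:, 0]) * x[:, 1] + 0.5 * x[:, 2] ** 2

rng = np.random.default_rng(0)
N, d = 100_000, 3

# Pick-and-freeze sampling: two independent input matrices
A = rng.uniform(0.0, 1.0, (N, d))
B = rng.uniform(0.0, 1.0, (N, d))

fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # vary only the i-th input
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var          # first-order Sobol index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order Sobol index
    print(f"parameter {i}: S1 = {S1:.3f}, ST = {ST:.3f}")
```

Because each Monte Carlo evaluation here is a cheap surrogate call rather than a full transport simulation, the number of iterations can be extended as the abstract describes, at a fraction of the original computational cost.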
Abstract:
The dissertation starts by describing the phenomena behind the increasing importance recently acquired by satellite applications. The spread of this technology brings implications, such as increased maintenance costs, which motivate the interest in developing advanced techniques that give spacecraft greater autonomy in health monitoring. Machine learning techniques are widely employed to lay the foundation for effective fault-detection systems that examine telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware. In the framework of outlier detection, the dissertation addresses unsupervised machine learning methods, where no prior knowledge of the data behavior is assumed. Specifically, two models are considered: Local Outlier Factor and One-Class Support Vector Machines. Their performance is compared in terms of both prediction accuracy and the equivalent computational cost. Both models are trained and tested on the same sets of time series data in a variety of settings, aimed at gaining insight into the effect of increasing dimensionality. The results allow us to claim that both models, combined with proper tuning of their characteristic parameters, successfully fulfil the role of outlier detectors in multivariate time series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model adapts readily to unforeseen patterns.
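For readers who want to reproduce the comparison in spirit, here is a minimal sketch of how the two detectors can be set up with scikit-learn. The synthetic "telemetry", the parameter values (`n_neighbors=20`, `nu=0.02`), and the contamination level are illustrative assumptions, not the dissertation's actual data or tuning.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)

# Synthetic multivariate data: mostly nominal samples plus a few
# injected anomalies, standing in for spacecraft telemetry windows.
nominal = rng.normal(0.0, 1.0, size=(500, 8))
faults = rng.uniform(-6.0, 6.0, size=(10, 8))
X = np.vstack([nominal, faults])

# Local Outlier Factor: density-based; -1 marks predicted outliers
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.02)
lof_pred = lof.fit_predict(X)

# One-Class SVM: boundary-based; nu bounds the fraction of outliers
ocsvm = OneClassSVM(kernel="rbf", nu=0.02, gamma="scale")
ocsvm_pred = ocsvm.fit(X).predict(X)

print("LOF flagged:   ", int(np.sum(lof_pred == -1)))
print("OC-SVM flagged:", int(np.sum(ocsvm_pred == -1)))
```

Sweeping `n_neighbors` and `nu` over a grid on such data is one way to probe the stability-to-parameter-choice property that the abstract reports in favor of Local Outlier Factor.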
Abstract:
Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner workings is still lacking. In this work, we try to understand the behavior of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as one would in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of training plays a central role in the dynamics of the weights and makes it difficult to liken neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, the performance degrades drastically. This result could be exploited in a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides its potential practical applications, our analysis shows that new and improved modelling of Deep Learning systems can pave the way to new and more efficient algorithms.
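A minimal PyTorch sketch of the pruning idea follows. The thesis's exact temperature definition is not given in the abstract, so the per-filter variance of the weight updates is used here as a hypothetical proxy; the layer, the dummy loss, and the number of pruned filters are likewise illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A single conv layer stands in for a trained CNN; the "temperature"
# of each filter is approximated (hypothetically) by the variance of
# its weight updates across a few noisy training steps.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
opt = torch.optim.SGD(conv.parameters(), lr=0.1)

updates = []
for _ in range(20):
    x = torch.randn(8, 3, 32, 32)
    before = conv.weight.detach().clone()
    loss = conv(x).pow(2).mean()   # dummy loss, for illustration only
    opt.zero_grad()
    loss.backward()
    opt.step()
    updates.append((conv.weight.detach() - before).flatten(1))

# Temperature proxy: mean squared update magnitude per filter
temp = torch.stack(updates).pow(2).mean(dim=(0, 2))

# Prune the 4 "hottest" filters by zeroing their weights, mirroring
# the abstract's observation that removing hot filters is harmless
hottest = temp.topk(4).indices
with torch.no_grad():
    conv.weight[hottest] = 0.0
    conv.bias[hottest] = 0.0
print("pruned filters:", hottest.tolist())
```

In a real training loop this masking step would be repeated periodically, so that filters whose updates never settle (and thus never contribute to loss reduction) are dropped early.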
Abstract:
Since the majority of the world's population lives in cities, and this share is expected to increase in the coming years, one of the biggest research challenges is determining the risk deriving from the high temperatures experienced in urban areas, together with improving responses to climate-related disasters, for example by introducing vegetation or built infrastructure that can improve air quality in the urban context. In this work, we investigate how different setups of the boundary and initial conditions imposed on an urban canyon generate different pollutant dispersion patterns. To do so, we exploit the low computational cost of Reynolds-Averaged Navier-Stokes (RANS) simulations to reproduce the dynamics of an infinite array of two-dimensional square urban canyons. A pollutant is released at street level to mimic the presence of traffic. The RANS simulations use the k-ε closure model, and vertical profiles of significant variables of the urban canyon, namely velocity, turbulent kinetic energy, and concentration, are reported. This is done using the open-source software OpenFOAM, modifying the standard solver simpleFoam to include the concentration equation and the temperature, the latter through a buoyancy term in the governing equations. The simulation results are validated against experimental data and Large-Eddy Simulation (LES) results from previous works, showing that the simulation reproduces all the quantities under examination with satisfactory accuracy. Moreover, this comparison shows that, although LES is known to be more accurate, albeit more expensive, RANS simulations represent a reliable tool when a smaller computational cost is needed. Overall, this work exploits the low computational cost of RANS simulations to produce multiple scenarios, useful for evaluating how pollutant dispersion changes under modifications of key variables, such as temperature.
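To illustrate the kind of scalar-transport equation added to the solver, here is a minimal one-dimensional advection-diffusion sketch in NumPy (not OpenFOAM code): dC/dt + u dC/dz = D d²C/dz² + S. The grid, velocity, eddy diffusivity, and street-level source are illustrative assumptions standing in for the thesis's canyon setup.

```python
import numpy as np

# 1D sketch of the passive-scalar (concentration) equation added to
# simpleFoam; all values below are illustrative, not the thesis's.
nz, dz, dt = 100, 0.1, 0.001
u, D = 0.5, 0.05                  # advection velocity, eddy diffusivity
C = np.zeros(nz)
S = np.zeros(nz)
S[0:5] = 1.0                      # street-level source mimicking traffic

for _ in range(40_000):           # time-march toward a steady state
    adv = -u * (C - np.roll(C, 1)) / dz                        # upwind advection
    dif = D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dz**2 # diffusion
    C += dt * (adv + dif + S)
    C[-1] = 0.0                   # open top boundary: clean air aloft

print("near-ground concentration:", C[:5].round(2))
```

The full solver couples this equation to the RANS momentum and k-ε equations, with the buoyancy term feeding the temperature back into the momentum balance; this cheap standalone version only shows the structure of the added transport equation.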