399 results for Solver
Abstract:
This study presents the procedure followed to predict the critical flutter speed of a composite UAV wing. At the beginning of the study, no information was available on the materials used to build the wing, and the internal structure of the wing was unknown. Ground vibration tests were performed to detect the structure's natural frequencies and mode shapes. The tests showed that the wing is very stiff, with well-separated first bending and torsional natural frequencies. Two finite element models were developed and matched to the experimental results. Some assumptions had to be introduced because of the uncertainties regarding the structure. The matching process was based on the sensitivity of the natural frequencies to changes in the mechanical properties of the materials. Once the experimental results were matched, average material properties were also obtained. Aerodynamic coefficients for the wing were obtained by means of CFD software. The same analysis was also conducted with the wing deformed in each of its first four mode shapes. A first approximation of the critical flutter speed was made with the classical V-g technique. Finally, the wing's aeroelastic behavior was simulated using a coupled CFD/CSD method, yielding a more accurate flutter prediction. The CSD solver is based on the time integration of the modal dynamic equations and requires the extraction of mode shapes from the previously performed finite element analysis. The results show that flutter onset is not a risk for the UAV, occurring at velocities well beyond its operating range.
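In the classical V-g technique mentioned above, flutter onset corresponds to the airspeed at which the artificial structural damping g of some mode crosses zero. A minimal sketch of locating that crossing by linear interpolation (the velocity and damping samples below are hypothetical, not the thesis data):

```python
def flutter_speed(velocities, dampings):
    """Locate the airspeed where modal damping g crosses zero.

    velocities: increasing airspeeds sampled in the V-g analysis
    dampings:   artificial damping g of one mode at those airspeeds
                (negative = stable, positive = unstable)
    Returns the interpolated crossing speed, or None if the mode
    stays stable over the sampled range.
    """
    for (v0, g0), (v1, g1) in zip(zip(velocities, dampings),
                                  zip(velocities[1:], dampings[1:])):
        if g0 < 0 <= g1:  # damping turns non-negative: flutter onset
            return v0 + (v1 - v0) * (-g0) / (g1 - g0)
    return None

# Hypothetical V-g curve for one mode
V = [40, 60, 80, 100, 120]            # airspeed, m/s
g = [-0.08, -0.05, -0.02, 0.01, 0.05]  # artificial damping
```

With these sample values the crossing falls between 80 and 100 m/s; for a wing like the one studied, all sampled dampings would stay negative over the operative range and the function would return None.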
Abstract:
Process systems design, operation, and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. With a scenario-based formulation, these problems lead to large-scale, well-structured MILPs/MINLPs. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. The method alternates between DWD iterations and BD iterations: DWD restricted master problems and BD primal problems yield a sequence of upper bounds, while BD relaxed master problems yield a sequence of lower bounds. A variant of CD called multicolumn-multicut CD, which adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) is proposed for solving two-stage stochastic programs with risk constraints. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over both a bilevel decomposition strategy and solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem obtained from the reformulation of the original problem is solved only when necessary.
A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Case studies drawn from renewable-resource and fossil-fuel-based applications in process systems engineering show that these decomposition approaches offer significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
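The Benders half of such schemes alternates a relaxed master problem (lower bounds) with primal subproblems (upper bounds). The sketch below runs that bounding loop on a toy two-stage problem in which the MILP master is replaced by enumeration over a small integer grid and the LP subproblem has a closed-form primal/dual solution; it illustrates only the bounding logic, not the thesis's methods:

```python
def benders_toy(d=3.0, xs=range(6), tol=1e-9):
    """Benders decomposition on a toy two-stage problem:
        min_x  x + Q(x),   Q(x) = min{ 2y : y >= d - x, y >= 0 },
    with x restricted to a small integer grid.
    """
    cuts = []                          # each cut: theta >= lam * (d - x)
    ub, lb = float("inf"), -float("inf")
    best_x = None
    while ub - lb > tol:
        # Relaxed master: pick x minimizing x + theta subject to the cuts.
        def theta(x):
            return max([0.0] + [lam * (d - x) for lam in cuts])
        x = min(xs, key=lambda x: x + theta(x))
        lb = x + theta(x)              # master value -> lower bound
        # Subproblem at this x: closed-form primal/dual solution.
        y = max(0.0, d - x)            # recourse decision
        lam = 2.0 if d - x > 0 else 0.0  # dual multiplier of y >= d - x
        ub = min(ub, x + 2.0 * y)      # feasible solution -> upper bound
        if ub - lb <= tol:
            best_x = x
            break
        cuts.append(lam)               # new optimality cut
    return best_x, ub
```

On this instance the loop converges in two iterations to x = 3 with objective 3.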
Abstract:
Kenya lies in the equatorial tropics of East Africa and is known as a global hot spot for aflatoxin contamination, particularly in maize. These toxic and carcinogenic compounds are fungal metabolites and therefore depend strongly on water activity. Water activity influences both the drying and the storability of foods and is thus a key factor in the development of energy-efficient, quality-oriented processing. The present work set out to investigate the change in water activity during the convective drying of maize. Using optimization software (MS Excel Solver) and sensor-recorded thermo-hygrometric data, the gravimetric moisture loss of maize cobs was predicted at 37 °C, 43 °C, and 53 °C, a range that spans the transition between low- and high-temperature drying. The results show clear differences between the behavior of the kernels and the cob core. Drying in the range of 35 °C to 45 °C combined with high air velocities (> 1.5 m/s) favored the drying of the kernels over the cob core and can therefore be recommended for energy-efficient drying of cobs with high initial moisture content. Further investigations examined the behavior of different bulk configurations in the batch drying commonly used for maize. Dehusked and shelled maize increased the airflow resistance of the bulk and led both to higher energy demand and to less uniform drying, which could only be remedied with additional technical effort such as mixing devices or airflow reversal. Owing to the lower requirements for aeration and control, drying whole cobs in undisturbed bulks can therefore be recommended, particularly for small farms in Kenya.
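The moisture-loss prediction above was made with MS Excel Solver fitted to sensor data. A comparable least-squares fit is sketched below with the standard Page thin-layer drying model, MR(t) = exp(-k t^n), and a brute-force grid search standing in for Solver's optimizer; the drying data and parameter grids are hypothetical:

```python
import math

def fit_page_model(times, mr, ks, ns):
    """Least-squares fit of the Page thin-layer drying model
        MR(t) = exp(-k * t**n)
    by grid search over candidate (k, n) pairs -- a stand-in for the
    optimization that MS Excel Solver performs on the same objective.
    """
    def sse(k, n):
        return sum((math.exp(-k * t**n) - m) ** 2 for t, m in zip(times, mr))
    return min(((k, n) for k in ks for n in ns), key=lambda p: sse(*p))

# Synthetic moisture-ratio curve generated with k=0.05, n=1.2 (hypothetical)
ts = [0.5, 1, 2, 4, 6, 8, 10]                 # drying time, h
mr = [math.exp(-0.05 * t**1.2) for t in ts]   # moisture ratio
k_grid = [i / 100 for i in range(1, 11)]      # 0.01 .. 0.10
n_grid = [i / 10 for i in range(8, 16)]       # 0.8 .. 1.5
k_fit, n_fit = fit_page_model(ts, mr, k_grid, n_grid)
```

Because the synthetic data were generated from parameters that lie on the grid, the fit recovers them exactly; with real sensor data, the grid would be refined around the minimum.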
The work further investigated dehumidification with a desiccant (silica gel) combined with a heat source and an enclosed air volume, and compared it with conventional drying. The results showed comparable dehumidification rates during the first 5 hours of drying. When silica gel was used, the resulting air state was influenced mainly by the enclosed air volume and the temperature. Granular desiccants are advantageous for maize drying from a hygiene perspective and can be regenerated, for example, in simple ovens, so that the quality losses associated with high-temperature or open-air drying can be avoided. High-quality maize drying equipment is very capital-intensive. From the present work, however, it can be concluded that simple improvements such as sensor-controlled aeration of batch dryers, the use of desiccants, and an adapted bulk depth can be practicable solutions for smallholders in Kenya. Further research is needed here, including on the use of renewable energy.
Abstract:
The main focus of this work is to define a numerical methodology for simulating an aerospike engine and then to analyse the performance of DemoP1, a small aerospike demonstrator built by Pangea Aerospace. The aerospike is a promising path to engines more efficient than current ones. Its main advantage is expansion adaptation, which allows it to reach optimal expansion over a wide range of ambient pressures, delivering more thrust than an equivalent bell-shaped nozzle. Its main drawbacks are the design of the cooling system and the manufacturing of the spike, but these issues now appear to be overcome through additive manufacturing. The simulations are performed with dbnsTurbFoam, an OpenFOAM solver designed to simulate supersonic compressible turbulent flow. This work is divided into five chapters. The first is a short introduction. The second gives a brief summary of the theoretical performance of the aerospike. The third introduces the numerical methodology for simulating a compressible supersonic flow. In the fourth chapter, the solver is verified against an experiment found in the literature. In the fifth chapter, the simulations of the DemoP1 engine are presented.
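The expansion-adaptation advantage can be illustrated with ideal isentropic nozzle relations: a fixed bell nozzle loses thrust through the pressure-mismatch term (pe - pa)·Ae at off-design altitudes, while an exit pressure that follows the ambient pressure maximizes thrust everywhere. A sketch with hypothetical chamber conditions (not DemoP1 data):

```python
import math

# Hypothetical chamber conditions and propellant properties
GAMMA, R, TC, PC, MDOT = 1.2, 350.0, 3000.0, 50e5, 1.0
PE_D = 1.0e5   # bell nozzle design exit pressure (sea level), Pa

def v_exit(p_exit):
    """Isentropic exhaust velocity expanding from chamber to p_exit."""
    return math.sqrt(2 * GAMMA * R * TC / (GAMMA - 1)
                     * (1 - (p_exit / PC) ** ((GAMMA - 1) / GAMMA)))

# Fixed bell geometry sized for the design exit pressure
TE = TC * (PE_D / PC) ** ((GAMMA - 1) / GAMMA)   # exit temperature
AE = MDOT * R * TE / (PE_D * v_exit(PE_D))       # exit area from mdot

def thrust_bell(pa):
    """Fixed bell: momentum thrust plus pressure-mismatch term."""
    return MDOT * v_exit(PE_D) + (PE_D - pa) * AE

def thrust_adapted(pa):
    """Idealized aerospike: exit pressure follows ambient pressure."""
    return MDOT * v_exit(pa)
```

For the same chamber state and mass flow, the adapted thrust matches the bell at the design pressure and exceeds it at every other ambient pressure, which is the classical optimality of adapted expansion.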
Abstract:
In recent years, developed countries have turned their attention to clean and renewable energy, such as wind and wave energy, which can be converted into electrical power. Companies and academic groups worldwide are currently investigating several wave energy concepts. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. The study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of these WECs and compute their hydrostatic properties using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third aim is to generate a *.h5 file for the 2BPA and OSWEC devices containing all the hydrodynamic data; BEMIO, a pre- and post-processing tool developed as part of WEC-Sim, is used to create these *.h5 files. The final goal is to run the Wave Energy Converter SIMulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and to estimate their power performance at different sites in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained with the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for sites in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and North Sea. Overall, NEMOH and WEC-Sim remain among the most practical tools for numerically estimating the power generation of WECs.
Abstract:
Many real-world decision-making problems are defined over forecast parameters: for example, one may plan an urban route by relying on traffic predictions. In these cases, the conventional approach is to train a predictor and then solve an optimization problem. This can be problematic, since mistakes made by the predictor may trick the optimizer into making dramatically wrong decisions. Recently, the field of Decision-Focused Learning has addressed this limitation by merging the two stages at training time, so that predictions are rewarded or penalized based on their outcome in the optimization problem. There are, however, still significant challenges to widespread adoption of the method, mostly related to limitations in generality and scalability. One possible way to deal with the second problem is a caching-based approach that speeds up the training process. This project investigates these techniques with the aim of reducing solver calls even further. For each considered method, we designed a dedicated smart sampling approach based on its characteristics. For the SPO method, we found that it suffices to initialize the cache with only a few solutions: those needed to filter the elements that still have to be learned properly. For the Blackbox method, we designed a smart sampling approach based on inferred solutions.
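The caching idea can be sketched as follows: for a linear objective, a solver call during training can be replaced by picking the best solution already held in a cache of feasible solutions. This is a simplified illustration of the general approach, not the exact SPO or Blackbox implementations:

```python
class SolutionCache:
    """Cache of feasible solutions for a combinatorial problem with a
    linear objective c . z. During decision-focused training, picking
    the best cached solution for a predicted cost vector can stand in
    for an exact (and expensive) solver call."""

    def __init__(self, solutions):
        self.solutions = [tuple(z) for z in solutions]

    def add(self, z):
        """Insert a newly solved solution, skipping duplicates."""
        z = tuple(z)
        if z not in self.solutions:
            self.solutions.append(z)

    def best(self, costs):
        """Return the cached solution minimizing the linear objective."""
        return min(self.solutions,
                   key=lambda z: sum(c * zi for c, zi in zip(costs, z)))

# Hypothetical binary solutions over three decision variables
cache = SolutionCache([(1, 0, 0), (0, 1, 1), (1, 1, 0)])
z = cache.best([5.0, 1.0, 2.0])   # cheapest cached solution: (0, 1, 1)
```

Smart sampling then amounts to choosing which solutions to seed the cache with so that the cached argmin disagrees with the true argmin as rarely as possible.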
Abstract:
Combinatorial decision and optimization problems arise in numerous applications, such as logistics and scheduling, and can be solved with various approaches. Boolean Satisfiability and Constraint Programming solvers are among the most used, and their performance is significantly influenced by the model chosen to represent a given problem. This has led to the study of model reformulation methods, one of which is tabulation, which consists in rewriting the expression of a constraint in terms of a table constraint. To apply it, one should identify which constraints can help and which can hinder the solving process. So far this has been done by hand, for example in MiniZinc, or automatically with manually designed heuristics, as in Savile Row. However, these heuristics have been shown to perform differently across problems and solvers, in some cases helping and in others hindering the solving procedure. Recent work in combinatorial optimization has shown that Machine Learning (ML) can be increasingly useful in model reformulation steps. This thesis aims to design an ML approach that identifies the instances for which Savile Row's heuristics should be activated. Additionally, since the heuristics may miss some good tabulation opportunities, we perform an exploratory analysis toward an ML classifier able to predict whether or not a constraint should be tabulated. The results for the first goal show that a random forest classifier improves the performance of four different solvers. The experimental results for the second task show that an ML approach could improve solver performance on some problem classes.
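As a simplified illustration of the first task, the sketch below trains a single decision stump on hypothetical instance features to predict whether tabulation should be activated; a random forest, as used in the thesis, would combine many such trees over richer features:

```python
def train_stump(X, y):
    """Fit a one-feature threshold rule by minimizing training error --
    a minimal stand-in for a single tree of a random forest.
    X: list of feature vectors, y: list of 0/1 labels.
    Returns (feature_index, threshold, label_if_above)."""
    best, best_errs = None, len(y) + 1
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for above in (0, 1):
                errs = sum((above if x[j] > t else 1 - above) != lab
                           for x, lab in zip(X, y))
                if errs < best_errs:
                    best, best_errs = (j, t, above), errs
    return best

def predict(stump, x):
    j, t, above = stump
    return above if x[j] > t else 1 - above

# Hypothetical instance features: [num_constraints, avg_constraint_arity]
X = [[40, 2], [50, 5], [30, 4], [45, 1]]
y = [0, 1, 1, 0]      # 1 = tabulation helped on this instance
stump = train_stump(X, y)
```

On these toy labels the stump learns to activate tabulation when the average constraint arity exceeds 2; the real classifier would be trained on measured solver runtimes.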
Abstract:
Modern High-Performance Computing (HPC) systems are steadily increasing in size and complexity, driven by the demand for larger simulations involving more complicated tasks and higher accuracy. However, as Dennard scaling approaches its power limits, software efficiency also plays an important role in increasing the overall performance of a computation. Tools that measure application performance in these increasingly complex environments provide insight into the intricate ways in which software and hardware interact. Monitoring power consumption in order to save energy is possible through processor interfaces such as Intel's Running Average Power Limit (RAPL). Given the low level of these interfaces, they are often paired with an application-level tool such as the Performance Application Programming Interface (PAPI). Since many problems in heterogeneous fields can be expressed as large linear systems, an optimized and scalable linear system solver can significantly decrease the time needed to solve them. One of the most widely used algorithms for large simulations is Gaussian elimination, whose most popular implementation for HPC systems is found in the Scalable Linear Algebra PACKage (ScaLAPACK) library. Another relevant algorithm, which is gaining popularity in the academic field, is the inhibition method. This thesis compares the energy consumption of the inhibition method and of Gaussian elimination from ScaLAPACK by profiling their execution during the solution of linear systems on the HPC architecture offered by CINECA. It also compares energy and power values for different rank, node, and socket configurations. The monitoring tools used to track the energy consumption of these algorithms are PAPI and RAPL, integrated with the parallel execution of the algorithms managed with the Message Passing Interface (MPI).
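For reference, the kernel whose energy is being profiled can be sketched serially as Gaussian elimination with partial pivoting; ScaLAPACK distributes this same computation across MPI ranks, so this is only a serial illustration of the algorithm, not of the parallel library:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    A: list of row lists, b: right-hand side list; both modified in place."""
    n = len(A)
    for k in range(n):
        # Partial pivoting: bring the largest remaining pivot into row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Eliminate column k below the pivot.
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

The O(n^3) elimination loop dominates both runtime and energy, which is why rank, node, and socket placement of exactly this work is what the RAPL/PAPI measurements capture.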
Abstract:
Since the majority of the world's population lives in cities, and this share is expected to grow in the coming years, one of the biggest research challenges is determining the risk arising from the high temperatures experienced in urban areas, together with improving responses to climate-related disasters, for example by introducing into the urban context vegetation or built infrastructure that can improve air quality. In this work, we investigate how different setups of the boundary and initial conditions on an urban canyon generate different pollutant dispersion patterns. To do so, we exploit the low computational cost of Reynolds-Averaged Navier-Stokes (RANS) simulations to reproduce the dynamics of an infinite array of two-dimensional square urban canyons. A pollutant is released at street level to mimic the presence of traffic. The RANS simulations use the k-ɛ closure model, and vertical profiles of significant variables of the urban canyon, namely velocity, turbulent kinetic energy, and concentration, are reported. This is done with the open-source software OpenFOAM, modifying the standard solver simpleFoam to include the concentration equation and the temperature, the latter by introducing a buoyancy term in the governing equations. The simulation results are validated against experimental data and Large-Eddy Simulation (LES) results from previous work, showing that the simulation reproduces all the quantities under examination with satisfactory accuracy. Moreover, this comparison shows that although LES is known to be more accurate, albeit more expensive, RANS simulations are a reliable tool when a smaller computational cost is needed. Overall, this work exploits the low computational cost of RANS simulations to produce multiple scenarios useful for evaluating how pollutant dispersion changes with modifications of key variables such as temperature.
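The concentration equation added to simpleFoam is a scalar transport equation. Its simplest analogue, useful for fixing ideas, is steady one-dimensional diffusion of a passive scalar between fixed boundary values, relaxed here by Jacobi iteration (a minimal illustration, not the thesis's solver):

```python
def steady_concentration(n=21, c_left=1.0, c_right=0.0, iters=20000):
    """Jacobi relaxation of the steady 1-D diffusion equation
        d2c/dx2 = 0,   c(0) = c_left,   c(1) = c_right,
    a toy analogue of the passive-scalar (concentration) equation added
    to the solver. The exact solution is the linear profile between the
    two boundary values."""
    c = [0.0] * n
    c[0], c[-1] = c_left, c_right
    for _ in range(iters):
        # Each interior node relaxes toward the average of its neighbours.
        c = ([c[0]]
             + [(c[i - 1] + c[i + 1]) / 2 for i in range(1, n - 1)]
             + [c[-1]])
    return c
```

The full solver additionally carries advection by the canyon flow, turbulent diffusivity from the k-ɛ model, and the street-level source term, but the iterative relaxation toward a balanced scalar field is the same idea.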