7 results for stock order flow model

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Electrical energy storage is a key issue nowadays. Since electricity is difficult to store directly, it can be stored in other forms and converted back to electricity when needed. Storage technologies for electricity can therefore be classified by the form of storage; here we focus on electrochemical energy storage systems, better known as electrochemical batteries. By far the most widespread batteries are the Lead-Acid ones, in the two main types known as flooded and valve-regulated. Batteries are present in many important applications, such as renewable energy systems and motor vehicles, so reliable battery models are needed to simulate these complex electrical systems. Although models developed by chemistry experts exist, they are too complex and are not expressed in terms of electrical networks; they are therefore inconvenient for practical use by electrical engineers, who need to interface them with models of other electrical systems, usually described by means of electrical circuits. Many modelling techniques are available in the literature. Starting from the Thevenin-based electrical model, the model can be adapted to better represent the Lead-Acid battery type by adding a parasitic-reaction branch and a parallel network. The third-order formulation of this model is chosen, being a trustworthy general-purpose model with a good trade-off between accuracy and complexity. Considering the equivalent circuit network, all the equations describing the battery model are discussed and then implemented one by one in Matlab/Simulink. The model is finally validated and then used to simulate the battery behaviour in different typical conditions.
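For illustration of the equivalent-circuit approach described above, the following sketch simulates a minimal Thevenin-type battery model in Python: a single RC branch, a linear OCV(SoC) curve, and coulomb counting, all with hypothetical parameter values. It omits the parasitic-reaction branch and the third-order structure of the thesis model, and it is not the Matlab/Simulink implementation discussed in the abstract.

```python
# Minimal sketch of a Thevenin-type equivalent-circuit battery model,
# NOT the thesis's third-order Lead-Acid model: a single RC branch and a
# linear OCV(SoC) are assumed here purely for illustration, and the
# parasitic-reaction branch is omitted. All parameter values are hypothetical.
import numpy as np

Q_nom = 50.0 * 3600.0   # capacity [C] (50 Ah), hypothetical
R0    = 0.01            # series resistance [ohm]
R1    = 0.02            # RC-branch resistance [ohm]
C1    = 2000.0          # RC-branch capacitance [F]

def ocv(soc):
    """Assumed linear open-circuit voltage vs. state of charge."""
    return 11.8 + 0.8 * soc   # [V], rough 12 V lead-acid range

def simulate(i_load, dt=1.0, soc0=0.9):
    """Explicit-Euler simulation of terminal voltage for a current profile
    i_load [A] (positive = discharge), with time step dt [s]."""
    soc, v1 = soc0, 0.0          # state of charge, RC-branch voltage
    v_out = []
    for i in i_load:
        soc -= i * dt / Q_nom                     # coulomb counting
        v1  += dt * (i / C1 - v1 / (R1 * C1))     # RC-branch dynamics
        v_out.append(ocv(soc) - R0 * i - v1)      # terminal voltage
    return np.array(v_out)

if __name__ == "__main__":
    current = np.full(3600, 5.0)        # 1 h discharge at 5 A
    v = simulate(current)
    print(f"terminal voltage: {v[0]:.2f} V -> {v[-1]:.2f} V")
```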

Relevance: 100.00%

Abstract:

Sub-grid scale (SGS) models are required in large-eddy simulations (LES) to model the influence of the unresolved small scales, i.e. the flow at the smallest scales of turbulence, on the resolved scales. In this work two SGS models are presented and analyzed in depth in terms of accuracy through several LESs at different spatial resolutions, i.e. grid spacings. The first part of this thesis focuses on the basic theory of turbulence, the governing equations of fluid dynamics and their adaptation to LES. Two important SGS models are then presented: the Dynamic eddy-viscosity model (DEVM), developed by Germano et al. (1991), and the Explicit Algebraic SGS model (EASSM) by Marstorp et al. (2009). In addition, some details about the implementation of the EASSM in a pseudo-spectral Navier-Stokes code (SIMSON; Chevalier et al., 2007) are given. The performance of the two models is investigated by means of LES of a channel flow at friction Reynolds numbers from $Re_\tau=590$ up to $Re_\tau=5200$, with relatively coarse resolutions. Data from each simulation are compared to baseline DNS data. The results show that, in contrast to the DEVM, the EASSM has promising potential for flow prediction at high friction Reynolds numbers: the higher the friction Reynolds number, the better the EASSM behaves and the worse the DEVM performs. The better performance of the EASSM is attributed to its ability to capture flow anisotropy at the small scales through a correct formulation of the SGS stresses. Moreover, a considerable reduction in the required computational resources can be achieved with the EASSM compared to the DEVM. The EASSM therefore combines accuracy and computational efficiency, implying a clear potential for industrial CFD usage.
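As a rough illustration of the eddy-viscosity family of SGS closures discussed above, the sketch below computes the deviatoric SGS stress from a resolved velocity-gradient tensor using the static Smagorinsky form with a fixed coefficient. It is not the dynamic coefficient procedure of the DEVM nor the anisotropic EASSM formulation; the constant and filter width are assumed values.

```python
# Minimal sketch of a fixed-coefficient eddy-viscosity SGS closure (static
# Smagorinsky form), shown only to illustrate the family of models the DEVM
# belongs to; the dynamic procedure and the EASSM are not reproduced.
# C_S and DELTA are assumed values.
import numpy as np

C_S   = 0.17    # assumed Smagorinsky constant
DELTA = 0.05    # assumed filter width

def sgs_stress(grad_u):
    """Deviatoric SGS stress tau_ij = -2 nu_t S_ij from the resolved
    velocity-gradient tensor grad_u (3x3), with
    nu_t = (C_S * DELTA)**2 * |S| and |S| = sqrt(2 S_ij S_ij)."""
    S = 0.5 * (grad_u + grad_u.T)                 # resolved strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))          # strain-rate magnitude
    nu_t = (C_S * DELTA) ** 2 * S_mag             # eddy viscosity
    return -2.0 * nu_t * S, nu_t

if __name__ == "__main__":
    grad_u = np.array([[0.0, 1.0, 0.0],           # e.g. simple shear du/dy = 1
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.0]])
    tau, nu_t = sgs_stress(grad_u)
    print("eddy viscosity:", nu_t)
    print("tau_12:", tau[0, 1])
```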

Relevance: 100.00%

Abstract:

Despite the scepticism of many scholars about the possibility of forecasting the stock market, several theories hypothesise that known information can be used to predict its future movements. The advent of artificial intelligence in the second half of the last century has produced revolutionary results in many fields, so much so that today the discipline is widely used in our daily life in many forms. In particular, machine learning has made it possible to develop intelligent systems that learn from data and can model complex problems. Given the success of these systems, they have also been applied to the hard task of predicting the stock market, first using historical financial market data as a source of knowledge, and later, with the development of natural language processing (NLP) techniques, also using natural-language data such as the text of financial news or investors' opinions. This work aims to provide an overview of the use of machine learning techniques in stock market prediction, starting from the most elementary techniques and moving up to the complex neural models that represent today's state of the art. The workings of machine learning models and the techniques used to train and evaluate them are also formalised; finally, an experiment is carried out in which, starting from financial and above all textual data, we attempt to correctly predict the change in the value of the S&P 500 stock index using a language model based on a neural network.
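To make the prediction setup concrete, the sketch below trains a logistic-regression baseline on lagged returns to classify next-day index direction. It stands in for the general supervised framing only: it is not the neural language model of the experiment described above, and the return series is synthetic rather than actual S&P 500 or news data.

```python
# Minimal sketch of a supervised index-direction setup: a logistic-regression
# baseline on lagged returns, NOT the thesis's neural language model on
# financial news. The return series below is synthetic; a real experiment
# would use historical S&P 500 data and text-derived features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=2000)        # synthetic daily returns

LAGS = 5
# Feature row j holds returns[j .. j+LAGS-1]; label is the sign of the next return.
X = np.column_stack([returns[i:len(returns) - LAGS + i] for i in range(LAGS)])
y = (returns[LAGS:] > 0).astype(int)               # 1 = index went up next day

split = int(0.8 * len(y))                          # chronological train/test split
model = LogisticRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"directional accuracy: {accuracy_score(y[split:], pred):.3f}")
```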

Relevance: 100.00%

Abstract:

Water resource management is a fundamental issue to be addressed in light of increasingly pressing problems, such as the scarcity of the resource in certain periods of the year and the protection of water bodies from withdrawals that may upset their natural balance. Indeed, recent years have been characterised by rather significant low-flow episodes, with few parallels in the entire twentieth century...

Relevance: 40.00%

Abstract:

One of the biggest challenges that contaminant hydrogeology is facing is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform a Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of the hydrogeological parameters, we evaluate which of them are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures and are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of the uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
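As a concrete illustration of variance-based GSA with Sobol indices, the sketch below estimates first-order indices with a standard pick-freeze Monte Carlo estimator, using the Ishigami test function as a stand-in for the transport model; the meta-model acceleration described in the abstract is not reproduced.

```python
# Minimal sketch of variance-based GSA: first-order Sobol indices via a
# standard pick-freeze Monte Carlo estimator. The Ishigami test function
# stands in for the hydrocarbon transport model; this is NOT the thesis's
# meta-model workflow.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Analytic test function standing in for the model under study."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order_sobol(model, d, n=100_000, seed=0):
    """First-order Sobol indices S_i = V_i / V, estimated as
    S_i ~= mean(f(B) * (f(A_B^i) - f(A))) / Var(f)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, size=(n, d))   # two independent input samples
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # vary only the i-th input
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

if __name__ == "__main__":
    print("first-order Sobol indices:", first_order_sobol(ishigami, d=3))
    # analytical values for Ishigami: ~0.31, ~0.44, 0.0
```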

Relevance: 40.00%

Abstract:

The first goal of this study is to analyse a real-world multiproduct onshore pipeline system in order to verify its hydraulic configuration and operational feasibility, by constructing a simulation model step by step from its elementary building blocks so that it reproduces the operation of the real system as precisely as possible. The second goal is to develop this simulation model into a user-friendly tool that can be used to find an “optimal” or “best” product batch schedule for a one-year time period. Such a batch schedule could change dynamically as perturbations occurring during operation influence the behaviour of the entire system. The result of the simulation, the ‘best’ batch schedule, is the one that minimizes the operational costs of the system. The costs involved in the simulation are inventory costs, interface costs, pumping costs, and penalty costs assigned to any unforeseen situations. A key factor for the performance of the simulation model is the way time is represented. In our model an event-based discrete-time representation is selected as the most appropriate for our purposes: the time horizon is divided into intervals of unequal length based on events that change the state of the system. These events are the arrivals and departures of the tanker ships, the openings and closures of the loading/unloading valves of the storage tanks at both terminals, and the arrivals and departures of trains and trucks at the Delivery Terminal. In the feasibility study we analyse the system’s operational performance with different Head Terminal storage capacity configurations. For these alternative configurations we evaluate the effect of tanker ship delays of different magnitude on the number of critical events and product interfaces generated, on the duration of pipeline stoppages, on the satisfaction of the product demand, and on the operating costs. Based on the results and the bottlenecks identified, we propose modifications to the original setup.
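A minimal sketch of the event-based discrete-time representation described above: a priority queue of timestamped events is processed in time order and the clock jumps from one event to the next, with a hypothetical inventory-cost accrual between events. The event types, times and cost rate are placeholders, not the actual pipeline simulation model.

```python
# Minimal sketch of event-based discrete-time simulation: the time horizon is
# divided into unequal intervals bounded by state-changing events, kept in a
# priority queue. Events and costs below are hypothetical placeholders.
import heapq

events = []   # (time [h], description)
heapq.heappush(events, (0.0,  "tanker ship arrives at Head Terminal"))
heapq.heappush(events, (6.5,  "loading valve of tank T1 opens"))
heapq.heappush(events, (12.0, "loading valve of tank T1 closes"))
heapq.heappush(events, (30.0, "train departs from Delivery Terminal"))

clock, inventory_cost = 0.0, 0.0
INVENTORY_RATE = 10.0   # hypothetical cost per hour of stored product

while events:
    t, what = heapq.heappop(events)                   # next event, in time order
    inventory_cost += (t - clock) * INVENTORY_RATE    # cost accrued since last event
    clock = t                                         # jump the clock to the event time
    print(f"t = {clock:5.1f} h  ->  {what}")

print(f"accumulated inventory cost: {inventory_cost:.1f}")
```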

Relevance: 40.00%

Abstract:

Graphite is a mineral commodity used as the anode material for lithium-ion batteries (LIBs), and its global demand is expected to increase significantly in the future due to the forecast growth of the electric vehicle market. Currently, the graphite used to produce LIBs is a mix of synthetic and natural graphite: the former is produced by crystallization of petroleum by-products, while the latter comes from mining, which raises concerns related to pollution, social acceptance, and health. This MSc work has the objective of determining the compositional and textural characteristics of natural, synthetic, and recycled graphite using SEM-EDS, XRF, XRD, and TEM analytical techniques, and of coupling these data with dynamic Material Flow Analysis (MFA) models aimed at predicting the future global use of graphite, in order to test the hypothesis that natural graphite will no longer be used in the global LIB market. The mineral analyses reveal that the synthetic graphite samples contain fewer impurities than the natural graphite, which has a rolled internal structure similar to that of the recycled one. Recycled graphite, however, shows fractures and discontinuities of the graphene layers caused by the recycling process, although its rolled internal structure can help the migration of Li ions through the fractures. Three dynamic MFA studies have been conducted to test distinct scenarios that include graphite recycling in the period 2022-2050, and it emerges that, irrespective of the scenario considered, synthetic graphite demand will increase, because of the limited stock of battery scrap available. Hence, I conclude that both natural and recycled graphite will continue to be used in the LIB market, at least until the year 2050, when the stock of recycled graphite production will be sufficient to supersede natural graphite. In addition, further improvements in the dismantling and recycling processes are necessary to improve the quality of recycled graphite.
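As a purely illustrative sketch of the stock-and-flow bookkeeping behind a dynamic MFA model of this kind, the loop below tracks graphite entering use, retiring after an assumed lifetime, and partially returning as recycled material; all figures (demand growth, lifetime, recycling rate) are hypothetical and do not reproduce the thesis scenarios.

```python
# Minimal sketch of a dynamic stock-and-flow MFA balance, NOT the calibrated
# scenarios of the study: demand growth, battery lifetime and recycling rate
# below are hypothetical placeholders.

LIFETIME = 10          # assumed battery lifetime [years]
RECYCLING_RATE = 0.5   # assumed share of end-of-life graphite recovered

inflow_history = []    # graphite entering use each year [kt]
in_use_stock = 0.0

for year in range(2022, 2051):
    demand = 1000.0 * 1.08 ** (year - 2022)             # hypothetical growing demand [kt/yr]
    # Outflow: batteries placed on the market LIFETIME years ago retire now.
    retired = inflow_history[-LIFETIME] if len(inflow_history) >= LIFETIME else 0.0
    recycled = RECYCLING_RATE * retired                  # secondary graphite available
    primary = max(demand - recycled, 0.0)                # synthetic + natural graphite needed
    in_use_stock += demand - retired                     # stock balance: inflow minus outflow
    inflow_history.append(demand)
    if year % 7 == 0:
        print(f"{year}: demand {demand:7.1f} kt, recycled {recycled:6.1f} kt, "
              f"primary {primary:7.1f} kt, in-use stock {in_use_stock:8.1f} kt")
```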