876 results for 2447: modelling and forecasting


Relevance:

100.00%

Publisher:

Abstract:

ABSTRACT This work introduces the problem of ground heave following large excavations in clay. Ground heave after excavation may go unnoticed, but there are numerous cases in which swelling continues for many years and even decades, e.g. the Shell Centre, London; Lion Yard, Cambridge; Bell Common, London. This heave is most often restrained by the presence of foundations, so a distributed pressure is generated which, if not considered at the design stage, can lead to cracking of the foundation itself. The core of the project is the modelling and analysis of heave in large excavations in clay, comparing the results with real data available in the literature. The idea for the project stems from the difficulty of obtaining reliable estimates and forecasts of heave following large excavations in overconsolidated clay. I first examined the theory and the factors that influence the magnitude and rate of swelling, such as stiffness, permeability, fissuring and soil structure. I then studied the swelling behaviour of overconsolidated clays following stress relief (excavation), highlighting the importance of distinguishing primary swelling from secondary swelling due to creep. The central theme of the project is the numerical analysis with FLAC of two large excavations in clay: Lion Yard, Cambridge, and Bell Common, London. Through a detailed parametric analysis I identified the parameters that best reproduce the real behaviour in the two cases under examination; in this way reliable estimates and forecasts of ground heave following large excavations become possible. The modelled excavations, Lion Yard and Bell Common, are in Gault Clay and London Clay respectively; drawing on well-known recent scientific papers I highlighted the main properties that distinguish the two soils, properties that differ markedly from the characteristics normally assumed when designing in clayey soil, and I was thus able to implement the best parameters to describe the behaviour of the two soils in the different models. I also studied soil-structure interaction: the pressure exerted by ground swelling depends strongly on how the shallow foundation is connected to the retaining wall, and this pressure must not be ignored at the design stage since it can reach significant values. In the Lion Yard excavation, considering the presence of deep foundations, I showed that the heave generates a distributed shear force between the foundation piles and the soil, a loading that should also be considered in design. The problem is not limited to soil-foundation interaction: during the excavation of major London foundations, stress relief caused significant upward displacement of sections of underground railway tunnels, a phenomenon that can create serious safety problems in the public transport network. Finally, the results of the FLAC program were compared with those of simplified methods; I found that with O'Brien's iterative method the results are close to reality and the computation time is much shorter than that required with FLAC (2-3 days).
In conclusion, thanks to a detailed parametric analysis it was possible to estimate the ground heave in overconsolidated clay in the two cases analysed.
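
As a rough complement to the FLAC analyses described above, the following Python sketch separates a heave estimate into a primary component driven by the vertical stress relief and a secondary, creep-driven component growing with the logarithm of time, the distinction emphasised in the abstract. The function `heave_estimate` and every parameter value are hypothetical illustrations, not the calibrated parameters of the Lion Yard or Bell Common models.

```python
import numpy as np

def heave_estimate(delta_sigma_v, layer_thickness, E_swell,
                   c_alpha, t, t_primary):
    """Rough 1-D heave estimate after excavation unloading (illustrative).

    delta_sigma_v   : vertical stress relief at mid-layer [kPa]
    layer_thickness : thickness of the swelling clay layer [m]
    E_swell         : one-dimensional swelling (unload) modulus [kPa]
    c_alpha         : secondary (creep) swelling coefficient [-]
    t, t_primary    : elapsed time and end of primary swelling [years]
    """
    # Primary swelling: elastic rebound of the layer under stress relief.
    primary = delta_sigma_v / E_swell * layer_thickness
    # Secondary swelling: creep growing with the logarithm of time.
    secondary = 0.0
    if t > t_primary:
        secondary = c_alpha * layer_thickness * np.log10(t / t_primary)
    return primary + secondary

# Example: 150 kPa of stress relief on a 10 m thick overconsolidated clay,
# evaluated 20 years after excavation (all values invented).
print(heave_estimate(150.0, 10.0, 30e3, 1e-3, 20.0, 2.0))
```

In the thesis this split is resolved numerically in FLAC; the closed forms above only show why the stiffness (here `E_swell`) controls the magnitude of primary heave while the creep coefficient controls its long-term growth.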

Relevance:

100.00%

Publisher:

Abstract:

The aim of this work is to conduct a finite element analysis of a small-size concrete beam and of a full-size concrete beam internally reinforced with BFRP and exposed to elevated temperatures. Experimental tests performed at Kingston University have been used for comparison with the numerical results for the small-size beam. Once the behaviour of the small-size beam at room temperature has been investigated, the heating phase is addressed: reinforced beams are tested at 100°C, 200°C and 300°C under load. The aim of the finite element analysis is to reproduce the three-point bending test carried out in the oven during exposure of the beam at room temperature and at elevated temperatures. The performance and deformability of reinforced beams are closely correlated to the material properties, and a broad analysis of the elastic modulus and of the coefficient of thermal expansion is given in this work. Developing a good correlation between the numerical model and the experimental tests is the main objective of the analysis of the small-size beam; for both models the aim is also to estimate the deterioration of the material properties due to the heating process and the influence of different parameters on the final result. The focus of the full-size modelling, which forms the last part of this work, is to evaluate the effect of elevated temperatures, the material deterioration and the deflection trend for a reinforced beam of a different size. A comparison between the results of the different models has been carried out.
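
Where the abstract ties deformability to the temperature-degraded elastic modulus, a minimal sketch can make the dependence explicit. The code below evaluates the elastic midspan deflection of a simply supported beam under three-point bending, delta = P L^3 / (48 E I), with the modulus reduced by a retention factor at each test temperature. The values in `E_RETENTION` are assumed for illustration only, not the values measured at Kingston University.

```python
# Hypothetical retention factors: fraction of the room-temperature
# elastic modulus retained at each test temperature (illustrative only).
E_RETENTION = {20: 1.00, 100: 0.90, 200: 0.75, 300: 0.60}

def midspan_deflection(P, L, E20, I, temperature):
    """Elastic midspan deflection of a simply supported beam under
    three-point bending, delta = P*L**3 / (48*E*I), with the elastic
    modulus reduced by a temperature-dependent retention factor."""
    E = E20 * E_RETENTION[temperature]
    return P * L**3 / (48.0 * E * I)

# Example: 10 kN at midspan of a 1.5 m beam, E20 = 30 GPa, I = 8.5e-5 m^4.
# The deflection grows as the modulus degrades with temperature.
for T in (20, 100, 200, 300):
    print(T, midspan_deflection(10e3, 1.5, 30e9, 8.5e-5, T))
```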

Relevance:

100.00%

Publisher:

Abstract:

The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types thanks to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of contrast agent concentration curves versus time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series is used for tissue classification. The main issue with these schemes is that they have no direct interpretation in terms of the physiological properties of the tissue. Model-based investigations, on the other hand, typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding, and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial solutions. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions. The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for classic bi-compartmental kinetic models; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques trained for the segmentation and classification of breast lesions.
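
A minimal sketch of the pixel-wise non-linear regression step may help: in the standard Tofts bi-compartmental model the tissue concentration is the convolution of the plasma input with an exponential residue function, and each pixel's curve is fitted for Ktrans and ve. The arterial input function, noise level and starting values below are invented for illustration; note how the fit depends on the initial solution `p0`, the sensitivity mentioned above.

```python
import numpy as np
from scipy.optimize import curve_fit

def tofts(t, ktrans, ve, cp):
    """Standard Tofts model: tissue concentration as the convolution of
    the plasma input cp(t) with an exponential residue exp(-kep*t)."""
    dt = t[1] - t[0]
    kep = ktrans / ve
    residue = np.exp(-kep * t)
    return ktrans * np.convolve(cp, residue)[: len(t)] * dt

np.random.seed(0)
t = np.arange(0, 300, 2.0)                    # time grid [s]
cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)     # toy arterial input function
ct_true = tofts(t, 0.12 / 60, 0.3, cp)        # synthetic "measured" pixel curve
ct_noisy = ct_true + np.random.normal(0, 0.002, t.size)

# Pixel-by-pixel non-linear regression for (Ktrans, ve).
popt, _ = curve_fit(lambda tt, kt, ve: tofts(tt, kt, ve, cp),
                    t, ct_noisy, p0=[0.05 / 60, 0.2],
                    bounds=(0, [1.0, 1.0]))
print("Ktrans [1/s], ve:", popt)
```

In practice this fit is repeated for every pixel in the region of interest, which is precisely what makes the model-based approach computationally demanding.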

Relevance:

100.00%

Publisher:

Abstract:

Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working together correctly to deliver the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means of simulating and analyzing the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues concerning the security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the developed scheme allows a low-cost evaluation of both jitter and process parameter variations. Then, reliability issues in the field of energy scavenging systems are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed with the development of a combined electrical and thermal model.
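
As an illustration of the last activity, the sketch below solves the standard single-diode equation of a PV cell for its current at a given terminal voltage, using root bracketing. It covers only the electrical half of the combined electrical-thermal model mentioned above, and all parameter values are illustrative assumptions, not those of the thesis.

```python
import numpy as np
from scipy.optimize import brentq

def cell_current(V, Iph=8.0, I0=1e-9, n=1.3, Rs=5e-3, Rsh=10.0, T=298.15):
    """Solve the implicit single-diode equation
    I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    for the cell current I at terminal voltage V (illustrative values)."""
    Vt = 1.380649e-23 * T / 1.602176634e-19   # thermal voltage kT/q [V]
    def f(I):
        return (Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0)
                - (V + I * Rs) / Rsh - I)
    return brentq(f, -2 * Iph, 2 * Iph)        # bracketed root search

# Under hot-spot conditions a shaded cell is driven to negative voltage
# by the rest of the string and dissipates power P = -V * I as heat.
for V in (0.5, 0.0, -5.0):
    print(V, cell_current(V))
```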

Relevance:

100.00%

Publisher:

Abstract:

In this work, the Generalized Beam Theory (GBT) is used as the main tool to analyze the mechanics of thin-walled beams. After an introduction to the subject and a quick review of some of the most well-known approaches to describing the behaviour of thin-walled beams, a novel formulation of the GBT is presented. This formulation contains the classic shear-deformable GBT available in the literature and contributes an additional description of cross-section warping that is variable along the wall thickness as well as along the wall midline. Shear deformation is introduced in such a way that the classical shear strain components of the Timoshenko beam theory are recovered exactly. In accordance with the new kinematics proposed, a revised form of the cross-section analysis procedure is devised, based on a unique modal decomposition. Later, a procedure for the a posteriori reconstruction of all three-dimensional stress components in the finite element analysis of thin-walled beams using the GBT is presented. The reconstruction is simple and based on the use of the three-dimensional equilibrium equations and of the RCP procedure. Finally, once the stress reconstruction procedure has been presented, a study of several open issues concerning the constitutive relations in the GBT is carried out. Specifically, a constitutive law based on mirroring the kinematic constraints of the GBT model into a specific stress field assumption is proposed. It is shown that this method is equally valid for isotropic and orthotropic beams and coincides with the conventional GBT approach available in the literature. Later on, an analogous procedure is presented for the case of laminated beams. Lastly, as a way to improve the inherently poor description of shear deformability in the GBT, the introduction of shear correction factors is proposed. Throughout this work, numerous examples are provided to demonstrate the validity of all the proposed contributions to the field.
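
The cross-section analysis mentioned above ultimately reduces, in most GBT formulations, to a generalized eigenvalue problem whose eigenvectors are the cross-section deformation modes. The sketch below shows only that algebraic skeleton; the matrices are random symmetric stand-ins, not the stiffness and geometric matrices assembled by the procedure proposed in the thesis.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative 4-DOF stand-in matrices for a discretised cross-section:
# K collects stiffness-like terms, M mass/geometry-like terms.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
K = A @ A.T + 4 * np.eye(4)      # symmetric positive definite
B = rng.standard_normal((4, 4))
M = B @ B.T + np.eye(4)

# Generalized symmetric eigenproblem K v = lambda M v; the eigenvectors
# define the GBT-style deformation modes that decouple the beam equations,
# so that modal amplitudes along the beam axis become the unknowns.
lam, modes = eigh(K, M)
print(lam)            # modal "stiffnesses"
print(modes[:, 0])    # first cross-section deformation mode
```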

Relevance:

100.00%

Publisher:

Abstract:

This thesis is focused on Smart Grid applications in medium voltage distribution networks. For the development of new applications, simulation tools able to model the dynamic behaviour of both the power system and the communication network are useful. Such a co-simulation environment allows assessing the feasibility of using a given network technology to support communication-based Smart Grid control schemes on an existing segment of the electrical grid, and determining the range of control schemes that different communication technologies can support. For this reason, a co-simulation platform is presented that has been built by linking the Electromagnetic Transients Program Simulator (EMTP v3.0) with a Telecommunication Network Simulator (OPNET-Riverbed v18.0). The simulator is used to design and analyze a coordinated use of Distributed Energy Resources (DERs) for voltage/var control (VVC) in distribution networks. The thesis focuses on a control structure based on the use of phasor measurement units (PMUs). In order to limit the required reinforcements of the communication infrastructures currently adopted by Distribution Network Operators (DNOs), the study concentrates on leader-less multi-agent system (MAS) schemes that do not assign special coordinating roles to specific agents. Leader-less MAS are expected to produce more uniform communication traffic than centralized approaches that include a moderator agent, and to be less affected by the limitations and constraints of individual communication links. The developed co-simulator has allowed the definition of specific countermeasures against the limitations of the communication network, with particular reference to latency and loss of information, for both wired and wireless communication networks. Moreover, the co-simulation platform has also been coupled with a mobility simulator in order to study specific countermeasures against the negative effects on the medium voltage distribution network caused by the concurrent connection of electric vehicles.
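
A minimal sketch of what "leader-less" means in practice: every agent runs the same local update on its PMU reading, exchanging values only with its neighbours in the communication graph, and the network converges without any moderator agent. The graph, step size and voltage values below are hypothetical.

```python
import numpy as np

# Undirected communication graph among 5 DER agents (adjacency matrix).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)

x = np.array([0.97, 1.02, 1.05, 0.99, 1.01])  # local PMU voltage readings [p.u.]
eps = 0.2                                     # step size < 1 / max node degree

# Leader-less average consensus: every agent applies the same rule,
# x_i <- x_i + eps * sum_j a_ij (x_j - x_i); no moderator agent exists.
for _ in range(50):
    x = x + eps * (A @ x - A.sum(axis=1) * x)

print(x)  # all agents converge to the network-average voltage estimate
```

Because each agent sends the same small message to its neighbours every round, the traffic is uniform across links, which is the property contrasted above with moderator-based centralized schemes.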

Relevance:

100.00%

Publisher:

Abstract:

Systems Biology is an innovative way of doing biology that has recently emerged in bioinformatics contexts, characterised by the study of biological systems as complex systems, with a strong focus on the system level and on the dimension of interaction. In other words, the objective is to understand biological systems as a whole, putting in the foreground not only the study of the individual parts as standalone components, but also their interaction and the global properties that emerge at the system level by means of the interaction among the parts. This thesis focuses on the adoption of multi-agent systems (MAS) as a suitable paradigm for Systems Biology, for developing models and simulations of complex biological systems. Multi-agent systems have recently been introduced in computer science as a suitable paradigm for modelling and engineering complex systems. Roughly speaking, a MAS can be conceived as a set of autonomous and interacting entities, called agents, situated in some kind of environment, where they fruitfully interact and coordinate so as to obtain a coherent global system behaviour. The claim of this work is that the general properties of MAS make them an effective approach for modelling and building simulations of complex biological systems, following the methodological principles identified by Systems Biology. In particular, the thesis focuses on cell populations as biological systems. In order to support the claim, the thesis introduces and describes (i) a MAS-based model conceived for modelling the dynamics of systems of cells interacting inside cell environments called niches, and (ii) a computational tool developed for implementing the models and executing the simulations. The tool is meant to work as a kind of virtual laboratory, on top of which virtual experiments can be performed, characterised by the definition and execution of specific models implemented as MAS, so as to support the validation, falsification and improvement of the models through the observation and analysis of the simulations. A hematopoietic stem cell system is taken as the reference case study for formulating a specific model and executing virtual experiments.
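
To make the MAS idea concrete, the sketch below implements a toy niche of cell agents: each agent autonomously decides, at every step, whether to self-renew, differentiate or die. The class name, the niche capacity and all rates are hypothetical and only illustrate the simulation loop of a virtual experiment, not the actual model of the thesis.

```python
import random

NICHE_CAPACITY = 200   # hypothetical carrying capacity of the niche

class Cell:
    """Minimal cell agent: each step it may divide, differentiate or die."""
    def __init__(self, kind="stem"):
        self.kind = kind

    def step(self, niche):
        r = random.random()
        if self.kind == "stem":
            if r < 0.10 and len(niche) < NICHE_CAPACITY:
                niche.append(Cell("stem"))          # self-renewal
            elif r < 0.15:
                self.kind = "progenitor"            # differentiation
        elif r < 0.02:
            niche.remove(self)                      # progenitor death

random.seed(1)
niche = [Cell() for _ in range(10)]
for t in range(100):                                # one virtual experiment
    for cell in list(niche):                        # iterate over a snapshot
        cell.step(niche)
print(sum(c.kind == "stem" for c in niche), len(niche))
```

The global population dynamics emerge from the local rules of the agents and their interaction with the niche, which is exactly the system-level perspective advocated above.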

Relevance:

100.00%

Publisher:

Abstract:

Nitrogen is one of the main products of the chemical industry, used mainly to ensure the safe storage of flammable compounds. Generators based on PSA systems are often cheaper than traditional cryogenic distillation. PSA processes use a fixed-bed column, filled with adsorbent material, which selectively adsorbs one component from a gas mixture. Oxygen diffuses much faster than nitrogen in the pores of carbon molecular sieves. Besides an excellent adsorbent material, the design is also fundamental to the performance of a PSA process. The adsorption step is followed by a desorption step, after which the adsorbent material can be reused in the following cycle. The lack of a process simulator has made it necessary to use experimental data to develop new processes, an approach that is very expensive and time-consuming. Mathematical modelling and simulation, taking all transport phenomena into account, is required both for a better understanding of the adsorbent and for process optimization. The column dynamics require the solution of sets of PDEs distributed in time and space. This work was carried out at the Münster University of Applied Sciences, Germany. The subject of this thesis is the modelling and simulation of a PSA plant for nitrogen production with the process simulator Aspen Adsorption, with the goal of enabling reliable, trustworthy and inexpensive process optimizations based on numerical computation in the future. The optimization of parameters and of kinetic, thermodynamic and equilibrium data is discussed. The model is reliable, rigorous and responds adequately to different boundary conditions; however, it is not yet fully satisfactory, since an adequate representation of the kinetics, i.e. of the mass transport phenomena, is still missing. Fine-tuning of the software will make it possible in the future to investigate new operating possibilities quickly.
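
To illustrate the kind of column dynamics involved (though not the actual Aspen Adsorption model), the sketch below integrates a fixed-bed adsorption column discretised upwind in space, with linear driving force (LDF) kinetics, a common simplified description of mass transport in carbon molecular sieves. All symbols and values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for breakthrough on a carbon molecular sieve bed.
N, L, u, eps = 50, 1.0, 0.05, 0.4   # grid cells, bed length [m], velocity [m/s], voidage
k_ldf, K_eq = 0.05, 2.0             # LDF coefficient [1/s], linear isotherm slope
dz = L / N
c_feed = 1.0                        # normalised feed concentration

def rhs(t, y):
    c, q = y[:N], y[N:]             # gas-phase and adsorbed-phase concentrations
    c_in = np.concatenate(([c_feed], c[:-1]))      # upwind inlet of each cell
    dqdt = k_ldf * (K_eq * c - q)                  # linear driving force kinetics
    # Mass balance: eps dc/dt = -u dc/dz - (1-eps) dq/dt
    dcdt = -u / (eps * dz) * (c - c_in) - (1 - eps) / eps * dqdt
    return np.concatenate((dcdt, dqdt))

# Stiff PDE system solved by the method of lines with an implicit integrator.
sol = solve_ivp(rhs, (0, 400), np.zeros(2 * N), method="BDF", rtol=1e-6)
print(sol.y[N - 1, -1])             # outlet concentration at the final time
```

A commercial simulator such as Aspen Adsorption solves the same kind of time- and space-distributed PDE system, with far richer isotherm, energy-balance and cycle-scheduling models on top.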

Relevance:

100.00%

Publisher:

Abstract:

This is the second part of a study investigating a model-based transient calibration process for diesel engines. The first part addressed the data requirements and data processing required for empirical transient emission and torque models. The current work focuses on modelling and optimization. The unexpected result of this investigation is that when trained on transient data, simple regression models perform better than more powerful methods such as neural networks or localized regression. This result has been attributed to extrapolation over data that have estimated rather than measured transient air-handling parameters. The challenges of detecting and preventing extrapolation using statistical methods that work well with steady-state data have been explained. The concept of constraining the distribution of statistical leverage relative to the distribution of the starting solution to prevent extrapolation during the optimization process has been proposed and demonstrated. Separate from the issue of extrapolation is preventing the search from being quasi-static. Second-order linear dynamic constraint models have been proposed to prevent the search from returning solutions that are feasible if each point were run at steady state, but which are unrealistic in a transient sense. Dynamic constraint models translate commanded parameters to actually achieved parameters that then feed into the transient emission and torque models. Combined model inaccuracies have been used to adjust the optimized solutions. To frame the optimization problem within reasonable dimensionality, the coefficients of commanded surfaces that approximate engine tables are adjusted during search iterations, each of which involves simulating the entire transient cycle. The resulting strategy, different from the corresponding manual calibration strategy and resulting in lower emissions and efficiency, is intended to improve rather than replace the manual calibration process.
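
A minimal sketch of a second-order linear dynamic constraint model as described above: the commanded trajectory of a parameter is filtered through a damped second-order system, so the optimizer sees what the engine can actually achieve rather than the quasi-static command. The natural frequency, damping and the boost-pressure example are hypothetical.

```python
import numpy as np

def achieved(commanded, dt, wn=2.0, zeta=0.7):
    """Second-order linear dynamic constraint model: integrates
    y'' = wn**2 * (u - y) - 2*zeta*wn*y' to turn a commanded parameter
    trajectory u into the trajectory y the engine can actually achieve
    (wn [rad/s] and zeta are illustrative values)."""
    y = np.zeros_like(commanded)
    ydot = 0.0
    for k in range(1, len(commanded)):
        yddot = wn**2 * (commanded[k - 1] - y[k - 1]) - 2 * zeta * wn * ydot
        ydot += yddot * dt
        y[k] = y[k - 1] + ydot * dt
    return y

# A commanded step (e.g. in boost pressure) lags when pushed through the
# constraint model, so the search cannot exploit quasi-static, infeasible
# operating points during the transient cycle simulation.
cmd = np.concatenate((np.full(50, 1.0), np.full(150, 1.8)))
print(achieved(cmd, 0.1)[[49, 60, 199]])
```

In the calibration loop the filtered (achieved) parameters, not the commanded ones, feed the transient emission and torque models, exactly as the abstract describes.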