7 results for empirical methods
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The scope of this thesis is to broaden knowledge about axially loaded pipe piles, which can serve as foundations for offshore wind turbines based on jacket structures. The goal was pursued by interpreting experimental data on large-scale model piles and by developing numerical tools for predicting their monotonic response to tensile and compressive loading up to failure. Experimental results on large-scale model piles, produced in two different campaigns at Fraunhofer IWES (Hannover, Germany), served as the reference for the whole work. Data from CPTs, blow counts during installation, and load-displacement curves supported an interpretation of the experimental results and their comparison with empirical methods from the literature, such as CPT-based methods and load transfer methods. Soil-structure interaction mechanisms were studied in order to better assess the mechanical response of the sand and thus to support the development of predictive tools for the experiments. A lack of information on the response of Rohsand 3152 in contact with steel was highlighted, and this gap was filled with a comprehensive campaign of interface shear tests. It was found that the response of the sand at ultimate conditions evolves with the roughness of the steel, valuable information to account for when attempting to predict pile capacity. In parallel, the work developed a numerical modelling procedure that was validated against the available large-scale model piles at IWES. The modelling strategy builds an FE model in which the mechanical properties of the sand are derived from the interpretation of commonly available geotechnical tests. The results of the FE model were compared with other predictive tools currently used in engineering practice.
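As a purely illustrative sketch of the flavour of the CPT-based methods cited above (the friction coefficient, cone-resistance profile and pile geometry below are hypothetical, not the calibrated values used in the thesis), the shaft capacity of a pile can be estimated by taking the unit skin friction as a fraction of the CPT cone resistance and integrating it over the shaft:

```python
import numpy as np

def cpt_shaft_capacity(depth, qc, diameter, alpha=0.01):
    """Crude CPT-based shaft capacity estimate: unit skin friction is
    taken as a fixed fraction (alpha) of the cone resistance qc and
    integrated over the shaft area. Real CPT-based design methods use
    depth-dependent and friction-fatigue terms; this is a toy version.

    depth    -- depths [m] where qc is sampled
    qc       -- cone resistance [kPa] at those depths
    diameter -- pile outer diameter [m]
    """
    tau = alpha * np.asarray(qc, dtype=float)    # unit skin friction [kPa]
    f = tau * np.pi * diameter                   # friction per metre of shaft [kN/m]
    dz = np.diff(np.asarray(depth, dtype=float))
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dz))  # trapezoidal rule [kN]

# Hypothetical qc profile: 2 MPa at the surface, increasing to 20 MPa at 10 m.
z = np.linspace(0.0, 10.0, 51)
qc = 2000.0 + 1800.0 * z                         # [kPa]
print(f"Illustrative shaft capacity: {cpt_shaft_capacity(z, qc, 0.5):.0f} kN")
```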
Abstract:
Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the device physics; however, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus an empirical active device model is generally described by an extrinsic linear part, which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task for circuit designers is evaluating the ultimate potential of a device for specific applications: once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of different-sized devices, good model scalability is required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling. This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should preserve the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, the literature offers either complicated technology-dependent scaling rules or computationally inefficient distributed models. This thesis shows how these problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects that occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, identified by means of the EM simulation of the device layout, allowing better frequency extrapolation and scalability than conventional empirical models.
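One standard way to write the intrinsic/extrinsic partition described above, assuming the usual split of the four-port parasitic network into two external ports (e) and two internal ports (i) loaded by the intrinsic two-port admittance (a generic admittance-matrix formulation, not necessarily the exact one adopted in the thesis):

```latex
\begin{pmatrix} I_e \\ I_i \end{pmatrix}
=
\begin{pmatrix} Y_{ee} & Y_{ei} \\ Y_{ie} & Y_{ii} \end{pmatrix}
\begin{pmatrix} V_e \\ V_i \end{pmatrix},
\qquad
I_i = -\,Y_{\mathrm{int}}\, V_i
\;\Longrightarrow\;
Y_{\mathrm{meas}} = Y_{ee} - Y_{ei}\,\bigl(Y_{ii} + Y_{\mathrm{int}}\bigr)^{-1} Y_{ie}.
```

Once the four-port blocks are known from the EM simulation of the layout, the measured two-port response can be de-embedded by inverting this relation to recover the intrinsic admittance.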
Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements. According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
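A toy illustration of the table look-up idea (hypothetical data and a generic spline reconstruction, standing in for the thesis's sampling-theory-based algorithm): a drain current measured on a finite grid of gate/drain voltages is turned into a continuous function of the applied voltages.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical bias grid (V) and a synthetic "measured" I-V surface (A);
# in a real table look-up model these would come from DC/pulsed measurements.
vgs = np.linspace(-2.0, 0.0, 9)
vds = np.linspace(0.0, 6.0, 13)
VGS, VDS = np.meshgrid(vgs, vds, indexing="ij")
ids = 0.1 * (VGS + 2.0) ** 2 * np.tanh(VDS)      # crude FET-like law

# Continuous reconstruction of the nonlinear characteristic from its samples,
# analogous to reconstructing a time-domain signal from sampled values.
ids_model = RectBivariateSpline(vgs, vds, ids)

print(ids_model(-1.0, 3.0)[0, 0])  # drain current at an off-grid bias point
```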
Abstract:
Bread dough, and particularly wheat dough, is probably the most dynamic and complicated rheological system owing to its viscoelastic behaviour, and its characteristics are very important since they strongly affect the textural and sensorial properties of the final products. The study of dough rheology has been a very challenging task for many researchers, since it can provide a wealth of information about dough formulation, structure and processing; this explains why dough rheology has been a matter of investigation for several decades. In this research, the rheological assessment of doughs and breads was performed using empirical and fundamental methods at both small and large deformation, in order to characterize different types of doughs and final products such as bread. To study the structural aspects of the food products, image analysis techniques were used to integrate the information coming from the empirical and fundamental rheological measurements. Dough properties were evaluated by texture profile analysis (TPA), dough stickiness (Chen and Hoseney cell) and uniaxial extensibility determination (Kieffer test) using a Texture Analyser; small-deformation rheological measurements were performed on a controlled stress-strain rheometer; the structure of the different doughs was observed using image analysis; and bread characteristics were studied by texture profile analysis (TPA) and image analysis. The objective of this research was to understand whether the different rheological measurements were able to characterize and differentiate the samples analysed, in order to investigate the effect of different formulations and processing conditions on dough and final product from a structural point of view. To this aim, the following materials were prepared and analysed: - frozen dough made without yeast; - frozen dough and bread made with frozen dough; - doughs obtained using different fermentation methods; - doughs made with Kamut® flour; - dough and bread made with the addition of ginger powder; - final products coming from different bakeries. The influence of sub-zero storage time on the viscoelastic performance of non-fermented and fermented dough and on the final product (bread) was evaluated using small- and large-deformation methods; in general, the longer the sub-zero storage time, the lower the positive viscoelastic attributes. The effects of fermentation time and of different types of fermentation (straight-dough method, sponge-and-dough procedure and poolish method) on the rheological properties of doughs were investigated using empirical and fundamental analysis, and image analysis was used to integrate this information through the evaluation of the dough's structure. The results of the fundamental rheological tests showed that the incorporation of sourdough (poolish method) provoked changes different from those seen in the other types of fermentation. The beneficial effect of some ingredients (extra-virgin olive oil and a liposomic lecithin emulsifier) in improving the rheological characteristics of Kamut® dough was confirmed even when the dough was subjected to low temperatures (24 and 48 hours at 4°C).
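For reference, the quantities delivered by the small-deformation oscillatory tests mentioned above are the standard linear viscoelastic ones (textbook definitions, not specific to this thesis):

```latex
\gamma(t) = \gamma_0 \sin(\omega t), \qquad
\sigma(t) = \sigma_0 \sin(\omega t + \delta), \qquad
G' = \frac{\sigma_0}{\gamma_0}\cos\delta, \qquad
G'' = \frac{\sigma_0}{\gamma_0}\sin\delta, \qquad
\tan\delta = \frac{G''}{G'}
```

Here G' (storage modulus) captures the elastic, solid-like response of the dough and G'' (loss modulus) the viscous one; a lower tan δ indicates a more elastic dough.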
Small-deformation oscillatory measurements and large-deformation mechanical tests provided useful information on the rheological properties of samples made with different amounts of ginger powder, showing that the sample with the highest amount of ginger powder (6%) had worse rheological characteristics than the other samples. Moisture content, specific volume, texture and crumb grain characteristics are the major quality attributes of bread products. The different samples analysed, "Coppia Ferrarese", "Pane Comune Romagnolo" and "Filone Terra di San Marino", showed a decrease in crumb moisture and an increase in hardness over the storage time. Parameters such as cohesiveness and springiness, evaluated by TPA, which are indicators of fresh-bread quality, decreased during storage. Empirical rheological tests revealed several differences among the samples, due to the different ingredients used in the formulations and the different processes adopted to prepare the samples; but since these products are handmade, such differences can be regarded as added value. In conclusion, small-deformation (in fundamental units) and large-deformation methods played a significant role in monitoring the influence of different ingredients, processing and storage conditions on dough viscoelastic performance and on the final product. Finally, knowledge of the formulation, processing and storage conditions, together with the evaluation of structural and rheological characteristics, is fundamental for the study of complex matrices like bakery products, where numerous variables can influence the final quality (e.g. raw materials, bread-making procedure, time and temperature of fermentation and baking).
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. To this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed P- and S-wave displacement spectra to estimate the spectral parameters, namely the corner frequencies and the low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties: Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to compute the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
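For context, the parametric spectral model used in this kind of multi-step inversion is typically of the Brune type with an attenuation correction; a common general form (the exact parametrization adopted in the thesis may differ) is

```latex
\Omega(f) = \frac{\Omega_0\, e^{-\pi f t^{*}}}{1 + \left(f/f_c\right)^{\gamma}},
\qquad
M_0 = \frac{4\pi \rho\, v^{3} R\, \Omega_0}{F\, R_{\theta\phi}},
```

where Ω0 is the low-frequency spectral level, fc the corner frequency, t* the whole-path attenuation operator, γ the high-frequency fall-off exponent, and M0 the seismic moment (ρ density, v wave speed, R hypocentral distance, Rθφ the average radiation-pattern coefficient, F the free-surface factor).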
Abstract:
Social Streets are groups of neighbours who want to recreate convivial ties, having noticed a weakening of social relations in their neighbourhoods. They start as online groups, on the Facebook platform, and then materialize in offline meetings, building convivial ties through practices of sociality, inclusiveness and gratuitousness. This thesis aims to analyse the socio-demographic profiles of Streeters and of the neighbourhoods involved, in order to understand how conviviality can be created and how the urban variable intervenes in these processes. It also seeks to understand the dynamics of attachment to the neighbourhood, the interests pursued by Streeters, their civic profile, and how this experience is positioned with respect to traditional associationism. To pursue the research objective, the three cities with the largest presence of Social Streets were studied: Milan, Bologna and Rome. The research included an analysis of Streeters through an online questionnaire replicated in all contexts. In addition, 131 interviews with Social Street administrators and founders were carried out, together with ethnographic and netnographic observations. The results show that Streeters belong to the middle-upper classes, are between thirty and fifty years old, have experienced mobility between neighbourhoods or between different national and international contexts, and find in Social Streets a way to create the neighbourhood ties they lost through their relocations. The neighbourhoods where Social Streets spread are themselves well-off, and there is a good correspondence between Streeters and the model of social centrality elaborated by Milbrath (1965), so that civic participation is also strongly felt among Social Street members. The contribution of this thesis to the sociological debate lies in offering an empirical analysis of a collective action at the urban level, that of Social Streets, showing the circularity between action and context produced by convivial mutualistic action.
Abstract:
Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of gathered data poses privacy concerns when storing and processing them in centralized locations. To this purpose, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distil knowledge from decentralized data, however, comes at the cost of facing more challenging learning settings, such as coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions in the related literature for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state-of-the-art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the usage of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. Our vision of relevant future research directions closes the manuscript.
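As a minimal sketch of the training loop such decentralized systems alternate (plain FedAvg-style weight averaging on a linear model; the client data, model and weighting scheme are illustrative, not the thesis's proposed methods):

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    """On-device training stand-in: a few gradient steps of linear
    regression on the client's private (X, y). Raw data never leaves
    the device; only the updated weights are communicated."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: local training on each client, then
    server-side averaging weighted by local dataset size."""
    updates = [(local_update(w_global.copy(), X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(n / total * w for w, n in updates)

# Two hypothetical clients with statistically different (non-IID) data.
clients = [
    (rng.normal(size=(50, 3)), rng.normal(size=50)),
    (rng.normal(1.0, 1.0, size=(80, 3)), rng.normal(size=80)),
]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
print("global weights after 10 rounds:", w)
```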
Abstract:
Knowledge graphs (KGs) and ontologies have been widely adopted for modelling numerous domains. However, understanding the content of an ontology/KG is far from straightforward, and existing methods only partially address this issue. This thesis is based on the assumption that identifying the Ontology Design Patterns (ODPs) in an ontology or a KG helps address this problem. In most cases, reused ODPs are not explicitly annotated, or their reuse is unintentional; automatically identifying ODPs in existing ontologies and KGs is therefore a challenge, and it is the main focus of this research work. This thesis analyses the role of ODPs in ontology engineering through experiences in actual ontology projects, placing this analysis in the context of existing ontology reuse approaches. Moreover, this thesis introduces a novel method for extracting empirical ODPs (EODPs) from ontologies, and a novel method for extracting EODPs from knowledge graphs, whose schemas are implicit. The first method groups the extracted EODPs into clusters, called conceptual components. Each conceptual component represents a modelling problem, e.g. representing collections. As EODPs are fragments possibly extracted from different ontologies, some of them will fall in the same cluster, meaning that they are implemented solutions to the same modelling problem. EODPs and conceptual components enable the empirical observation and comparison of modelling solutions to common modelling problems across different ontologies. The second method extracts EODPs from a KG as sets of probabilistic axioms/constraints involving the instantiated ontological entities. These EODPs can support KG inspection and comparison, providing insights into how certain entities are described in a KG. An additional contribution of this thesis is an ontology for annotating ODPs in ontologies and KGs.
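A toy illustration of the empirical spirit of the second method (observing how entities of a class are typically described in a KG): the sketch below gathers plain predicate-usage frequencies per class, far simpler than the probabilistic axioms/constraints the thesis extracts, and the data is hypothetical.

```python
from collections import Counter, defaultdict

from rdflib import Graph, RDF

# Hypothetical toy KG in Turtle; a real run would parse an actual KG dump.
TTL = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Person ; ex:name "Alice" ; ex:knows ex:bob .
ex:bob   a ex:Person ; ex:name "Bob" .
ex:acme  a ex:Org    ; ex:name "ACME" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# For each class, count how often each predicate describes its instances:
# a crude empirical picture of "how this class is modelled" in the KG.
usage = defaultdict(Counter)
for s, klass in g.subject_objects(RDF.type):
    for p in g.predicates(subject=s):
        if p != RDF.type:
            usage[klass][p] += 1

for klass, preds in usage.items():
    n = len(set(g.subjects(RDF.type, klass)))
    for p, c in preds.items():
        print(f"{klass}  {p}  used {c} time(s) over {n} instance(s)")
```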