20 results for strut and tie models
Abstract:
Deformability is often crucial in the design of many civil-engineering structural elements, and design becomes all the more burdensome when both long- and short-term deformability have to be considered. In this thesis, long- and short-term deformability has been studied from both the material and the structural modelling points of view. Two materials have been considered: pultruded composites and concrete. A new finite element model for thin-walled beams has been introduced. As a main assumption, cross-sections are considered rigid in their plane; this hypothesis replaces the classical beam-theory assumption that cross-sections remain plane in the deformed state. It also reduces the total number of degrees of freedom, making the analysis faster than with two-dimensional finite elements. Warping in the longitudinal direction is left free, which allows phenomena such as shear lag to be described. The new finite-element model has first been applied to concrete thin-walled beams (such as high-span roof girders or bridge girders) subject to instantaneous service loadings. Concrete in its cracked state has been considered through a smeared crack model for beams under bending. At a second stage, the FE model has been extended to the viscoelastic field and applied to pultruded composite beams under sustained loadings, adopting the generalized Maxwell model. As far as materials are concerned, long-term creep tests have been carried out on pultruded specimens, in both tension and shear. Some specimens were strengthened with carbon fibre plies to reduce short- and long-term deformability. Tests were performed in a climate-controlled room, with specimens kept under constant load for two years. As for concrete, a model for tertiary creep has been proposed. The basic idea is to couple the UMLV linear creep model with a damage model in order to describe nonlinearity. An effective strain tensor, weighting the total and the elasto-damaged strain tensors, controls damage evolution through the damage loading function. Creep strains are related to the effective stresses (defined by damage models) and are therefore associated with the intact material.
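As a rough illustration of how the generalized Maxwell (Prony-series) description enters a long-term analysis, the sketch below evaluates a relaxation modulus over a sustained-load history; all branch moduli, relaxation times and the applied stress are hypothetical placeholders, not the parameters identified in the thesis.

```python
import numpy as np

def relaxation_modulus(t, E_inf, branches):
    """Prony-series relaxation modulus E(t) of a generalized Maxwell model.

    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
    branches: list of (E_i, tau_i) pairs (hypothetical values).
    """
    t = np.asarray(t, dtype=float)
    E = np.full_like(t, E_inf)
    for E_i, tau_i in branches:
        E += E_i * np.exp(-t / tau_i)
    return E

# Hypothetical parameters (MPa, hours) for a pultruded laminate
branches = [(3000.0, 10.0), (2000.0, 1000.0), (1500.0, 50000.0)]
t = np.logspace(0, 4.3, 50)            # ~1 hour to ~2 years, in hours
E_t = relaxation_modulus(t, E_inf=20000.0, branches=branches)

# Quasi-elastic estimate of strain under a constant sustained stress
sigma = 50.0                            # MPa
eps_t = sigma / E_t
```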
Abstract:
The goal of the present research is to define a Semantic Web framework for precedent modelling, using knowledge extracted from text, metadata, and rules, while maintaining a strong text-to-knowledge morphism between legal text and legal concepts, in order to fill the gap between a legal document and its semantics. The framework is composed of four different models that make use of standard languages from the Semantic Web stack of technologies: a document metadata structure, modelling the main parts of a judgement and creating a bridge between a text and its semantic annotations of legal concepts; a legal core ontology, modelling abstract legal concepts and institutions contained in a rule of law; a legal domain ontology, modelling the main legal concepts in a specific domain addressed by case-law; and an argumentation system, modelling the structure of argumentation. The input to the framework includes metadata associated with judicial concepts and an ontology library representing the structure of case-law. The research relies on previous efforts of the community in the field of legal knowledge representation and rule interchange for applications in the legal domain, applying the theory to a set of real legal documents and exploiting OWL axiom definitions as much as possible so that they provide a semantically powerful representation of the legal document and a solid ground for an argumentation system based on a defeasible subset of predicate logic. It appears that some new features of OWL 2 unlock useful reasoning features for legal knowledge, especially when combined with defeasible rules and argumentation schemes. The main task is thus to formalize the legal concepts and argumentation patterns contained in a judgement, with the following requirement: to check, validate and reuse the discourse of a judge - and the argumentation he produces - as expressed by the judicial text.
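A minimal sketch of the text-to-knowledge bridge described above, written with the rdflib Python library; the namespaces, class and property names (JudgementParagraph, expressesConcept, ContractualNullity) and the document URI are invented for illustration and are not the actual ontology entities of the framework.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespaces; the real framework relies on its own metadata
# structure, core ontology and domain ontology.
LEX = Namespace("http://example.org/legal-core#")
DOC = Namespace("http://example.org/judgements/")

g = Graph()
g.bind("lex", LEX)

# A paragraph of the judgement text ...
para = DOC["case-42/paragraph-7"]
g.add((para, RDF.type, LEX.JudgementParagraph))
g.add((para, RDFS.label, Literal("The court finds the clause void ...", lang="en")))

# ... annotated with an abstract legal concept from the core ontology,
# keeping an explicit text-to-knowledge link.
g.add((para, LEX.expressesConcept, LEX.ContractualNullity))

print(g.serialize(format="turtle"))
```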
Abstract:
The mesophotic zone is frequently defined as ranging between 30-40 m and 150 m depth. However, these borders are necessarily imprecise, because the penetration of light along the water column varies with local factors. Moreover, the density of data on mesophotic ecosystems varies geographically, with temperate latitudes far less explored than tropical ones. This is the case for the Mediterranean Sea, where information on mesophotic ecosystems is much scarcer than in tropical regions. The lack of a clear definition of the borders of the mesophotic zone may become a problem when information must be transferred to policy, which requires a coherent spatial definition in order to plan proper management and conservation measures. The present thesis aims at providing information on the spatial definition of the mesophotic zone in the Mediterranean Sea, its biodiversity, and the distribution of its ecosystems. The first chapter analyzes the available information on mesophotic ecosystems in the Mediterranean Sea to identify gaps in the literature and maps the mesophotic zone in the basin using light penetration estimated from satellite data. In the second chapter, different visual techniques for studying mesophotic ecosystems are compared to identify the best analytical method for estimating diversity and habitat extension. In the third chapter, a set of Remotely Operated Vehicle (ROV) surveys performed on mesophotic assemblages in the Mediterranean Sea is analyzed to describe their taxonomic and functional diversity and the environmental factors influencing their structure. A Habitat Suitability Model is run in the fourth chapter to map the distribution of areas suitable for the presence of deep-water oyster reefs in the Adriatic-Ionian area. The fifth chapter explores the mesophotic zone in the northern Gulf of Mexico, providing its spatial and vertical extension and information on the diversity associated with its mesophotic ecosystems.
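One simple way to derive a light-based depth boundary from satellite data is the Beer-Lambert attenuation law; the sketch below uses a hypothetical diffuse attenuation coefficient and illustrative 1% and 0.01% irradiance thresholds, which are not necessarily those adopted in the thesis.

```python
import math

def depth_of_irradiance_fraction(kd, fraction):
    """Depth (m) at which irradiance drops to `fraction` of its surface value,
    assuming exponential (Beer-Lambert) attenuation: E(z) = E0 * exp(-kd * z)."""
    return -math.log(fraction) / kd

# Hypothetical diffuse attenuation coefficient estimated from satellite (m^-1)
kd_490 = 0.06

upper_limit = depth_of_irradiance_fraction(kd_490, 0.01)    # ~1% light level
lower_limit = depth_of_irradiance_fraction(kd_490, 0.0001)  # ~0.01% light level
print(f"Light-based mesophotic band: {upper_limit:.0f}-{lower_limit:.0f} m")
```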
Abstract:
Gastrointestinal stromal tumors (GIST) are the most common mesenchymal tumors of the gastrointestinal tract, arising from the interstitial cells of Cajal (ICCs) or their precursors. The vast majority of GISTs (75-85%) harbor KIT or PDGFRA mutations. A small percentage of GISTs (about 10-15%) do not harbor any of these driver mutations and have historically been called wild-type (WT). Among them, from 20% to 40% show loss of function of the succinate dehydrogenase complex (SDH) and are also defined as SDH-deficient GISTs. SDH-deficient GISTs display distinctive clinical and pathological features and can be sporadic or associated with the Carney triad or Carney-Stratakis syndrome. These tumors arise most frequently in the stomach, with a predilection for the distal stomach and antrum, have a multi-nodular growth, display a histological epithelioid phenotype, and present frequent lympho-vascular invasion. Lymph node metastases and an indolent course are representative features of SDH-deficient GISTs. This subset of GIST is known for the immunohistochemical loss of succinate dehydrogenase subunit B (SDHB), which signals the loss of function of the entire SDH complex. The overall aim of my PhD project is the comprehensive characterization of SDH-deficient GIST. Throughout the project, clinical, molecular and cellular characterizations were performed using next-generation sequencing (NGS) technologies, which have the potential to identify molecular patterns useful for diagnosis and for the development of novel treatments. Moreover, while there are many different cell lines and preclinical models of KIT/PDGFRA-mutant GIST, no reliable cell model of SDH-deficient GIST, which could be used for studies on tumor evolution and in vitro assessment of drug response, has yet been developed. Therefore, another aim of this project was to develop a pre-clinical model of SDH-deficient GIST using the novel technology of induced pluripotent stem cells (iPSC).
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge of the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on the performance of many existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme relying on a deep-learning denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep-learning-based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
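A minimal sketch of a Plug-and-Play forward-backward iteration of the kind discussed above: a gradient step on the data-fidelity term followed by a denoising step in place of a proximal operator. The blur operator, the smoothing "denoiser" and all parameters are toy stand-ins, not the learned gradient-domain denoiser developed in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_forward_backward(y, A, At, denoiser, step, n_iter=50):
    """Plug-and-Play forward-backward iteration (sketch):
    a gradient step on the data-fidelity term 0.5*||A x - y||^2,
    then a denoiser in place of the proximal operator of the regularizer."""
    x = At(y)
    for _ in range(n_iter):
        grad = At(A(x) - y)           # gradient of the fidelity term
        x = denoiser(x - step * grad)
    return x

# Toy deblurring example: A is a Gaussian blur, the "denoiser" is a mild
# smoother standing in for a learned denoiser.
blur = lambda x: gaussian_filter(x, sigma=2.0)
A, At = blur, blur                    # the blur kernel is symmetric, so A^T = A
denoiser = lambda x: gaussian_filter(x, sigma=0.5)

rng = np.random.default_rng(0)
x_true = np.zeros((64, 64)); x_true[24:40, 24:40] = 1.0
y = A(x_true) + 0.01 * rng.standard_normal(x_true.shape)
x_hat = pnp_forward_backward(y, A, At, denoiser, step=1.0, n_iter=100)
```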
Abstract:
Bioelectronic interfaces have advanced significantly in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision drives technological development toward neurons. Emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although they are incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways, crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical and biocompatibility properties, has emerged as a promising neural interface material. However, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene-oxide (GO) and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics can be evoked in astrocytes, in vitro and in brain slices, depending on the conductive/insulating properties of the rGO/GO electrodes. Stimulation with rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations (“P-type”), exclusively due to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated with GO electrodes show a slower and sustained Ca2+ response (“S-type”), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-protein coupled receptor intracellular signalling pathways. We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interface, favouring, respectively, Ca2+ release from intracellular stores or extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes, with applications in neuroscience and bioelectronic medicine.
Abstract:
The topic of my Ph.D. thesis is the finite element modeling of coseismic deformation imaged by DInSAR and GPS data. I developed a method to calculate synthetic Green functions with finite element models (FEMs) and then use linear inversion methods to determine the slip distribution on the fault plane. The method is applied to the 2009 L’Aquila earthquake (Italy) and to the 2008 Wenchuan earthquake (China). I focus on the influence of the rheological features of the Earth's crust, by implementing seismic tomographic data, and on the influence of topography, by implementing Digital Elevation Model (DEM) layers in the FEMs. Results for the L’Aquila earthquake highlight the non-negligible influence of the medium structure: homogeneous and heterogeneous models show discrepancies of up to 20% in the fault slip distribution values. Furthermore, in the heterogeneous models a new area of slip appears above the hypocenter. Regarding the 2008 Wenchuan earthquake, the very steep topographic relief of the Longmen Shan Range is implemented in my FE model. A large number of DEM layers covering East China is used to achieve complete coverage of the FE model. My objective was to explore the influence of topography on the retrieved coseismic slip distribution. The inversion results reveal significant differences between the flat and the topographic model. Thus, the flat models frequently adopted are inadequate to represent the Earth's surface topography, especially in the case of the 2008 Wenchuan earthquake.
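Schematically, the slip-inversion step can be written as a regularized linear least-squares problem d = G m with a non-negativity constraint on the slip; in the sketch below the Green functions are random placeholders rather than the FEM-derived ones used in the thesis.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical linear slip inversion: d = G m, with d the DInSAR/GPS surface
# displacements, G the matrix of Green functions (one column per fault patch,
# generated at random here as a stand-in for FEM results), and m the slip on
# each patch. Damping is added via Tikhonov regularization.
rng = np.random.default_rng(1)
n_obs, n_patch = 200, 40
G = rng.standard_normal((n_obs, n_patch))          # placeholder Green functions
m_true = np.maximum(0.0, rng.standard_normal(n_patch))
d = G @ m_true + 0.05 * rng.standard_normal(n_obs)

lam = 1.0                                          # regularization strength
L = np.eye(n_patch)                                # simple damping operator
G_aug = np.vstack([G, lam * L])                    # augmented system
d_aug = np.concatenate([d, np.zeros(n_patch)])

m_hat, _ = nnls(G_aug, d_aug)                      # non-negative slip estimate
```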
Abstract:
In this thesis, we investigate the role of applied physics in epidemiological surveillance through the application of mathematical models, network science and machine learning. The spread of a communicable disease depends on many biological, social, and health factors. The large masses of data available make it possible, on the one hand, to monitor the evolution and spread of pathogenic organisms and, on the other hand, to study the behavior of people, their opinions and their habits. Presented here are three lines of research in which we attempted to solve real epidemiological problems through data analysis and the use of statistical and mathematical models. In Chapter 1, we applied language-inspired Deep Learning models to transform influenza protein sequences into vectors encoding their information content. We then attempted to reconstruct the antigenic properties of different viral strains using regression models and to identify the mutations responsible for vaccine escape. In Chapter 2, we constructed a compartmental model to describe the spread of a bacterium within a hospital ward. The model was informed and validated on time series of clinical measurements, and a sensitivity analysis was used to assess the impact of different control measures. Finally, in Chapter 3, we reconstructed the network of retweets among COVID-19-themed Twitter users in the early months of the SARS-CoV-2 pandemic. By means of community detection algorithms and centrality measures, we characterized users’ attention shifts in the network, showing that scientific communities, initially the most retweeted, lost influence over time to national political communities. In the Conclusion, we highlight the importance of the work done in light of the main contemporary challenges for epidemiological surveillance. In particular, we present reflections on the importance of nowcasting and forecasting, the relationship between data and scientific research, and the need to unite the different scales of epidemiological surveillance.
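As an illustration of the compartmental approach mentioned for Chapter 2, the sketch below integrates a toy two-compartment (SIS-like) model of bacterial colonization in a ward; the structure and parameter values are assumptions for demonstration, not the model actually developed in the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sis_ward(t, y, beta, gamma, N):
    """Toy SIS-type compartmental model for colonization in a hospital ward:
    S susceptible patients, C colonized patients (hypothetical structure)."""
    S, C = y
    new_col = beta * S * C / N      # new colonizations per unit time
    return [-new_col + gamma * C,   # dS/dt: decolonization returns patients to S
            new_col - gamma * C]    # dC/dt

N = 20                              # ward size (patients)
sol = solve_ivp(sis_ward, (0, 60), [19, 1], args=(0.3, 0.1, N), dense_output=True)
t = np.linspace(0, 60, 200)
S, C = sol.sol(t)                   # colonization trajectory over 60 days
```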
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a highly relevant issue, due to the severe consequences, in terms of human and economic losses, that may be caused by flooding or by water in general. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damage can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with residual uncertainty about what will actually happen. This type of uncertainty is what is discussed and analyzed in this thesis. In operational problems, the ultimate aim of forecasting systems is not to reproduce the river's behavior: that is only a means of reducing the uncertainty about what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature there is often confusion on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on evaluating the model prediction in terms of its ability to represent reality, or on evaluating what will actually happen on the basis of the information given by the model forecast? Once this idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time available to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
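A minimal sketch of how a flooding probability within a given time horizon can be estimated from an ensemble of forecasts, under the assumption that the ensemble members are equally likely; the threshold, horizon and synthetic trajectories below are invented for illustration.

```python
import numpy as np

def flooding_probability(ensemble_levels, threshold):
    """Probability that the water level exceeds `threshold` at least once
    within the forecast horizon, estimated from an ensemble of forecasts.

    ensemble_levels: array of shape (n_members, n_lead_times)."""
    exceeds = (ensemble_levels >= threshold).any(axis=1)
    return exceeds.mean()

# Hypothetical ensemble of 100 forecast trajectories over a 48 h horizon
rng = np.random.default_rng(0)
base = np.linspace(2.0, 3.5, 48)                        # rising hydrograph (m)
ensemble = base + rng.normal(0.0, 0.4, size=(100, 48))  # forecast uncertainty

p_flood = flooding_probability(ensemble, threshold=4.0)
print(f"Probability of exceeding the 4.0 m flood level within 48 h: {p_flood:.2f}")
```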
Abstract:
In 'Involutory reflection groups and their models' (F. Caselli, 2010), a uniform Gelfand model is constructed for all complex reflection groups G(r,p,n) satisfying GCD(p,n)=1,2 and for all their quotients modulo a scalar subgroup. The present work provides a refinement of this model. The final decomposition obtained is compatible with the generalized Robinson-Schensted correspondence.
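For context, a Gelfand model of a finite group G is a representation containing every irreducible representation exactly once, so that (in standard notation, not taken from the paper itself):

```latex
\[
  M \;\cong\; \bigoplus_{\varphi \in \operatorname{Irr}(G)} \varphi,
  \qquad
  \dim M \;=\; \sum_{\varphi \in \operatorname{Irr}(G)} \varphi(1).
\]
```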
Abstract:
The modern stratigraphy of clastic continental margins is the result of the interaction of several geological processes acting on different time scales, among which sea-level oscillations, sediment supply fluctuations and local tectonics are the main mechanisms. During the past three years my PhD focused on understanding the impact of each of these processes on the deposition of the central and northern Adriatic sedimentary successions, with the aim of reconstructing and quantifying the Late Quaternary eustatic fluctuations. In the last few decades, several authors have tried to quantify past eustatic fluctuations through the analysis of direct sea-level indicators, such as drowned barrier-island deposits or coral reefs, or through indirect methods, such as oxygen isotope ratios (δ18O) or modeling simulations. Sea-level curves obtained from direct indicators record a composite signal, formed by the contribution of the global eustatic change and of regional factors, such as tectonic processes or glacial-isostatic rebound effects: the eustatic signal has to be obtained by removing the contribution of these other mechanisms. To obtain the most realistic sea-level reconstructions, it is important to quantify the tectonic regime of the central Adriatic margin. This result has been achieved by integrating a numerical approach with the analysis of high-resolution seismic profiles. In detail, the subsidence trend obtained from the geohistory analysis and the backstripping of borehole PRAD1.2 (a 71 m continuous borehole drilled in 185 m of water depth, south of the Mid Adriatic Deep - MAD - during the European Project PROMESS 1, Profile Across Mediterranean Sedimentary Systems, Part 1) has been confirmed by the analysis of lowstand paleoshorelines and by the benthic foraminifera associations investigated through the borehole. This work showed an evolution from an inner-shelf environment, during Marine Isotopic Stage (MIS) 10, to upper-slope conditions, during MIS 2. Once the tectonic regime of the central Adriatic margin has been constrained, it is possible to investigate the impact of sea level and sediment supply fluctuations on the deposition of the Late Pleistocene-Holocene transgressive deposits. The Adriatic transgressive record (TST - Transgressive Systems Tract) is formed by three correlative sedimentary bodies deposited in less than 14 kyr since the Last Glacial Maximum (LGM); in particular, along the central Adriatic shelf and in the adjacent slope basin the TST is formed by marine units, while along the northern Adriatic shelf the TST is represented by coastal deposits in a backstepping configuration. The central Adriatic margin, characterized by a thick transgressive sedimentary succession, is the ideal site to investigate the impact of late Pleistocene climatic and eustatic fluctuations, among which Meltwater Pulses 1A and 1B and the Younger Dryas cold event. The central Adriatic TST is formed by a tripartite deposit bounded by two regional unconformities. In particular, the middle TST unit includes two prograding wedges, deposited in the interval between the two Meltwater Pulse events, as highlighted by several 14C age estimates, and likely recorded the Younger Dryas cold interval.
Modeling simulations, obtained with the two coupled models HydroTrend 3.0 and 2D-Sedflux 1.0C (developed by the Community Surface Dynamics Modeling System - CSDMS) and integrated by the analysis of high-resolution seismic profiles and core samples, indicate that: 1 - the prograding middle TST unit, deposited during the Younger Dryas, was formed as a consequence of an increase in sediment flux, likely connected to a decline in vegetation cover in the catchment area due to the establishment of sub-glacial arid conditions; 2 - the two-stage prograding geometry was the consequence of a sea-level still-stand (or possibly a fall) during the Younger Dryas event. The northern Adriatic margin, characterized by a broad and gentle shelf (350 km wide, with a low-angle plunge of 0.02° to the SE), is the ideal site to quantify the timing of each step of the post-LGM sea-level rise. The modern shelf is characterized by sandy deposits of barrier-island systems in a backstepping configuration, showing younger ages at progressively shallower depths, which record the step-wise nature of the last sea-level rise. The age-depth model, obtained from dated samples of basal peat layers, is in good agreement with previously published sea-level curves and highlights the post-glacial eustatic trend. The interval corresponding to the Younger Dryas cold reversal, instead, is more complex: two coeval coastal deposits characterize the northern Adriatic shelf at very different water depths. Several explanations and different models can be attempted to explain this conundrum, but the problem remains unsolved.
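As a schematic illustration of the age-depth approach based on dated basal peat samples, the sketch below interpolates a relative sea-level curve from hypothetical age-depth pairs; the numbers are invented and do not reproduce the Adriatic dataset.

```python
import numpy as np

# Hypothetical age-depth data for basal peat samples (age in cal kyr BP,
# depth in metres below present sea level).
ages_kyr = np.array([14.0, 12.5, 11.0, 9.5, 8.0])
depths_m = np.array([-95.0, -70.0, -55.0, -35.0, -20.0])

def relative_sea_level(age_kyr):
    """Simple age-depth model: linear interpolation of relative sea level vs age,
    usable to compare a post-glacial eustatic trend with published curves."""
    # np.interp needs increasing abscissae, so reverse the arrays
    return np.interp(age_kyr, ages_kyr[::-1], depths_m[::-1])

print(relative_sea_level(10.0))   # interpolated RSL (m) at 10 cal kyr BP
```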
Abstract:
The primary objective of this thesis is to obtain a better understanding of the 3D velocity structure of the lithosphere in central Italy. To this end, I adopted the Spectral-Element Method to perform accurate numerical simulations of the complex wavefields generated by the 2009 Mw 6.3 L’Aquila event and by its foreshocks and aftershocks, together with some additional events within our target region. For the mainshock, the source was represented by a finite fault, and different models for central Italy, both 1D and 3D, were tested. Surface topography, attenuation and the Moho discontinuity were also accounted for. Three-component synthetic waveforms were compared to the corresponding recorded data. The results of these analyses show that 3D models, including all the known structural heterogeneities in the region, are essential to accurately reproduce waveform propagation. They capture features of the seismograms mainly related to topography or to low-wavespeed areas and, combined with a finite fault model, result in a favorable match between data and synthetics for frequencies up to ~0.5 Hz. We also obtained peak ground velocity maps, which provide valuable information for seismic hazard assessment. The remaining differences between data and synthetics led us to combine SEM with an adjoint method to iteratively improve the available 3D structure model for central Italy. A total of 63 events and 52 stations in the region were considered. We performed five iterations of the tomographic inversion, calculating the misfit function gradient - necessary for the model update - from adjoint sensitivity kernels, constructed using only two simulations per event. Our latest updated model features a reduced traveltime misfit function and improved agreement between data and synthetics, although further iterations, as well as refined source solutions, are necessary to obtain a new reference 3D model for central Italy tomography.
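For a classical least-squares waveform misfit, the adjoint source that drives the sensitivity-kernel simulation is simply the time-reversed data residual; the sketch below illustrates this on synthetic toy traces (the misfit actually used in the thesis may differ, e.g. a traveltime-based one).

```python
import numpy as np

def l2_waveform_misfit(synthetic, observed, dt):
    """Least-squares waveform misfit between a synthetic and an observed trace,
    and the corresponding adjoint source (time-reversed residual), as used to
    build adjoint sensitivity kernels. Traces here are hypothetical arrays."""
    residual = synthetic - observed
    misfit = 0.5 * dt * np.sum(residual ** 2)
    adjoint_source = residual[::-1]      # time-reversed residual
    return misfit, adjoint_source

# Toy traces: a synthetic slightly delayed with respect to the "data"
dt = 0.01
t = np.arange(0, 20, dt)
observed = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.1 * t)
synthetic = np.sin(2 * np.pi * 0.5 * (t - 0.2)) * np.exp(-0.1 * t)
chi, adj = l2_waveform_misfit(synthetic, observed, dt)
```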
Abstract:
The aim of this thesis is to investigate the effect of heterogeneities within the subducting plate on the dynamics of subduction. In particular, I study the motion of the trench for oceanic and continental subduction, first separately and then together in the same system, to understand how they interact. Understanding these features is fundamental to reconstructing the evolution of complex subduction zones, such as the Central Mediterranean. For this purpose, I developed 2D and 3D numerical models of oceanic and continental subduction in which the rheological, geometrical and compositional properties of the plates are varied. In these models, the trench and the overriding plate move self-consistently as a function of the dynamics of the system. The effect of continental subduction on trench migration is investigated in depth. Results from a parametric study show that, despite different rheological properties of the plates, all models with a uniform continental crust share the same kinematic behaviour: the trench starts to advance once the continent arrives at the subduction zone. Hence, the advancing mode in continental collision scenarios is at least partly driven by an intrinsic feature of the system. Moreover, the presence of a weak lower crust within the continental plate can lead to delamination. Indeed, by changing the viscosity of the lower crust, both delamination and slab detachment can occur. Delamination is favoured by a low viscosity of the lower crust, because this facilitates the mechanical decoupling between the crust and the lithospheric mantle. These features are observed both in 2D and 3D models, but the numerical results of the 3D models also show that the rheology of the continental crust has a very strong effect on the dynamics of the whole system, since it influences not only the continental part of the plate but also the oceanic sides.
Abstract:
BTES (borehole thermal energy storage) systems exchange thermal energy by conduction with the surrounding ground through borehole materials. The spatial variability of the geological properties and the space-time variability of the hydrogeological conditions affect the real power rate of the heat exchangers and, consequently, the amount of energy extracted from or injected into the ground. For this reason, it is not an easy task to identify the underground thermal properties to use at the design stage. At the current state of technology, the Thermal Response Test (TRT) is the in situ test that characterizes ground thermal properties with the highest degree of accuracy, but it does not fully solve the problem of characterizing the thermal properties of a shallow geothermal reservoir, simply because it characterizes only the neighborhood of the heat exchanger at hand and only for the test duration. Different analytical and numerical models exist for the characterization of shallow geothermal reservoirs, but they are still inadequate and not exhaustive: more sophisticated models must be taken into account, and a geostatistical approach is needed to tackle natural variability and estimate uncertainty. The approach adopted for reservoir characterization is the “inverse problem”, typical of oil & gas field analysis. Similarly, we create different realizations of the thermal properties by direct sequential simulation and we find the one that best fits the real production data (fluid temperature over time). The software used to develop the heat production simulation is FEFLOW 5.4 (Finite Element subsurface FLOW system). A geostatistical reservoir model has been set up based on thermal property data from the literature and on spatial variability hypotheses, and it has been tested against a real TRT. We also analyzed and used two other codes (SA-Geotherm and FV-Geotherm), which are two implementations of the same numerical model as FEFLOW (the Al-Khoury model).
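As background on TRT interpretation, the infinite line-source approximation predicts that the late-time mean fluid temperature grows linearly with ln(t), with slope q/(4πλ), where q is the heat rate per unit borehole length; the sketch below fits that slope to recover a ground thermal conductivity from synthetic test data (all values are hypothetical and this is not the inverse-problem workflow of the thesis).

```python
import numpy as np

def trt_conductivity(times_s, fluid_temp_c, power_w, length_m, t_min_s=10 * 3600):
    """Estimate ground thermal conductivity from TRT data with the infinite
    line-source approximation: at late times the mean fluid temperature grows
    linearly with ln(t), with slope q / (4*pi*lambda), q = Q / H."""
    mask = times_s >= t_min_s                      # discard early-time data
    slope, _ = np.polyfit(np.log(times_s[mask]), fluid_temp_c[mask], 1)
    q = power_w / length_m                         # heat rate per unit length (W/m)
    return q / (4.0 * np.pi * slope)

# Hypothetical TRT: 72 h test, 100 m borehole, 5 kW injection
t = np.linspace(3600, 72 * 3600, 300)
lam_true, q = 2.2, 5000.0 / 100.0
temps = (12.0 + q / (4 * np.pi * lam_true) * np.log(t)
         + 0.05 * np.random.default_rng(0).standard_normal(t.size))

print(trt_conductivity(t, temps, power_w=5000.0, length_m=100.0))
```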