14 results for computational model

in AMS Tesi di Laurea - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The mechanical action of the heart is made possible by electrical events involving the cardiac cells, a property that places heart tissue among the excitable tissues. At the cellular level, the electrical event is the signal that triggers mechanical contraction, inducing a transient increase in intracellular calcium which, in turn, carries the message of contraction to the contractile proteins of the cell. The primary goal of my project was to implement in CUDA (Compute Unified Device Architecture, a parallel computing architecture created by NVIDIA) a tissue model of the rabbit sinoatrial node, in order to evaluate the heterogeneity of its structure and how that variability influences the behavior of the cells. In particular, each cell has its own intrinsic discharge frequency, different from that of every other cell in the tissue, so it is interesting to study how the cells synchronize and to examine the final discharge frequency they settle on once synchronized.
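The thesis runs detailed SAN cell equations on the GPU; as a rough, purely illustrative stand-in, the synchronization question can be sketched with Kuramoto phase oscillators in plain Python (all parameters here are hypothetical and unrelated to the actual CUDA implementation): cells with slightly different intrinsic frequencies lock onto a common rhythm once coupling is strong enough.

```python
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]: r ~ 1 means full synchronization."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

def simulate(n=30, coupling=1.5, dt=0.01, steps=4000, seed=7):
    """Mean-field Kuramoto model: a toy stand-in for coupled pacemaker cells,
    each with its own intrinsic frequency (~1 Hz, small random spread)."""
    rng = random.Random(seed)
    omega = [2 * math.pi * (1.0 + rng.gauss(0, 0.02)) for _ in range(n)]
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        re = sum(math.cos(t) for t in theta) / n
        im = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(re, im), math.atan2(im, re)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return order_parameter(theta)

r_coupled = simulate(coupling=1.5)    # strong coupling -> near-synchrony
r_uncoupled = simulate(coupling=0.0)  # no coupling -> phases stay scattered
```

The interesting quantity, as in the thesis, is the common rhythm the heterogeneous population converges to; here the order parameter merely detects whether convergence happened.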

Relevance:

70.00%

Publisher:

Abstract:

Recent experiments have revealed the fundamental importance of neuromodulatory action on the activity-dependent synaptic plasticity underlying behavioral learning and spatial memory formation. Neuromodulators affect synaptic plasticity by modifying the dynamics of receptors on the synaptic membrane. However, chemical substances other than neuromodulators, such as receptor co-agonists, can also influence receptor dynamics and thus participate in determining plasticity. Here we focus on D-serine, which has been observed to affect the activity thresholds of synaptic plasticity by co-activating NMDA receptors. We use a computational model for spatial value learning with plasticity between two place-cell layers. D-serine release is CB1R-mediated, and the model reproduces the impairment of spatial memory due to astrocytic CB1R knockout in a mouse navigating the Morris water maze. The addition of path-constraining obstacles shows how the performance impairment depends on the environment's topology. The model can explain the experimental evidence and produces testable predictions that increase our understanding of the complex mechanisms underlying learning.
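As a hedged illustration of the modeling style described above (not the authors' actual network), value learning over place-cell activity can be sketched with linear TD(0) on a 1-D track, where a hypothetical `d_serine` factor simply scales the plasticity rate:

```python
import math
import random

def place_activity(pos, centers, width=0.1):
    """Gaussian place-cell activations for a 1-D position in [0, 1]."""
    return [math.exp(-((pos - c) ** 2) / (2 * width ** 2)) for c in centers]

def learn_value(episodes=200, n_cells=20, lr=0.1, gamma=0.95, d_serine=1.0, seed=0):
    """TD(0) learning of a value function that peaks at the goal (pos = 1.0).
    d_serine scales the effective learning rate: a crude stand-in for
    co-agonist-dependent plasticity, invented for this sketch."""
    rng = random.Random(seed)
    centers = [i / (n_cells - 1) for i in range(n_cells)]
    w = [0.0] * n_cells
    for _ in range(episodes):
        pos = 0.0
        while pos < 1.0:
            nxt = min(pos + 0.05 + 0.05 * rng.random(), 1.0)
            phi = place_activity(pos, centers)
            reward = 1.0 if nxt >= 1.0 else 0.0
            v = sum(wi * p for wi, p in zip(w, phi))
            v_next = 0.0 if nxt >= 1.0 else sum(
                wi * p for wi, p in zip(w, place_activity(nxt, centers)))
            delta = reward + gamma * v_next - v  # TD error
            for i in range(n_cells):
                w[i] += lr * d_serine * delta * phi[i]
            pos = nxt
    value = lambda p: sum(wi * q for wi, q in zip(w, place_activity(p, centers)))
    return value(0.9), value(0.1)

near_goal, far_goal = learn_value()  # value should be higher close to the goal
```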

Relevance:

60.00%

Publisher:

Abstract:

The distortion in the perceived distance between two point stimuli applied to the skin of different body regions is known as Weber's Illusion. This illusion has been observed and verified in many experiments in which subjects were asked to judge the distance between two stimuli applied to the skin of different body parts. These experiments showed that the same physical distance between stimuli is judged differently across body regions. It is widely accepted that distance on the skin is often perceived in a distorted way, but the neural mechanisms driving this illusion are still largely unknown. In particular, it is still unclear how the distance between two simultaneous point stimuli is interpreted, and which brain areas are involved in this processing. Weber's Illusion can be partly explained by differences in mechanoreceptor density across body regions and by the distorted image of our body held in the Primary Somatosensory Cortex (the homunculus). However, these mechanisms seem insufficient to explain the observed phenomenon: according to the results of a century of experiments, the actual distortions in distance judgments are much smaller than those the Primary Cortex would suggest. In other words, the illusion observed in tactile experiments is much smaller than the effect predicted by the different receptor densities of the various body parts, or by cortical extent.
This has led to the hypothesis that tactile distance perception requires an additional brain area, and further mechanisms that act to rescale, at least partially, the information coming from the primary cortex, so as to maintain a degree of constancy in tactile distance perception across the body surface. The existence of a kind of "Rescaling Process" has therefore been proposed, operating to reduce this illusion toward a more veridical perception. This process is supported by many researchers in neuroscience, notably by Dr. Matthew Longo, neuroscientist at the Department of Psychological Sciences (Birkbeck, University of London), whose research on tactile distance perception and body representation appears to confirm this hypothesis. However, the neural mechanisms and circuits underlying this putative Rescaling Process are still largely unknown. The aim of this thesis was to clarify the possible network organization and the neural mechanisms giving rise to Weber's Illusion and to the Rescaling Process, using a neural network model. Most of the work was carried out in the Department of Psychological Sciences at Birkbeck, University of London, under the supervision of Dr. M. Longo, who contributed mainly to the interpretation of the model results, suggesting how to process them to obtain clearer information, and who provided useful guidance for validating the results through statistical tests.
To replicate Weber's Illusion and the Rescaling Process, the neural network was organized into two main layers of neurons corresponding to two different cortical functional areas: • a first layer of neurons (performing an initial processing of the external stimuli), which can be thought of as part of the Primary Somatosensory Cortex subject to cortical magnification (the homunculus); • a second layer of neurons (further processing the information coming from the first layer), which may represent a higher cortical area involved in implementing the Rescaling Process. The networks include synaptic connections within each layer (lateral synapses) and between the two layers (feed-forward synapses), and assume that each neuron's activity depends on its input through a static sigmoidal relationship as well as first-order dynamics. Using this structure, two different neural networks were implemented for two different body regions (for example, hand and arm), characterized by different tactile resolution and different cortical magnification, in order to replicate Weber's Illusion and the Rescaling Process. These models can help in understanding the mechanism of Weber's Illusion and thus offer a possible explanation of the Rescaling Process. Moreover, the implemented networks provide a useful contribution to understanding the strategy the brain adopts to interpret distance on the skin surface. Beyond this explanatory purpose, the models could also be used to generate predictions that can later be tested in vivo on real subjects through tactile perception experiments.
It is important to stress that the implemented models are purely functional and are not intended to reproduce physiological and anatomical details. The main results obtained with these models are the reproduction of Weber's Illusion for two different body regions, hand and arm, as reported in many articles on tactile illusions (for example, "The perception of distance and location for dual tactile pressures" by Barry G. Green). Weber's Illusion was recorded through the network outputs and then plotted, with an attempt to explain the reasons behind these results.
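A minimal sketch of the neuron model described above (static sigmoid plus first-order dynamics); the time constant, slope, and inputs are illustrative, not the thesis values:

```python
import math

def sigmoid(u, slope=1.0, threshold=0.0):
    """Static sigmoidal activation used for each model neuron."""
    return 1.0 / (1.0 + math.exp(-slope * (u - threshold)))

def relax(u, tau=10.0, dt=0.1, steps=2000, y0=0.0):
    """First-order neuron dynamics: tau * dy/dt = -y + sigmoid(u).
    Integrated with forward Euler until the activity settles."""
    y = y0
    for _ in range(steps):
        y += (dt / tau) * (-y + sigmoid(u))
    return y

y_high = relax(u=4.0)   # strong input: activity relaxes toward sigmoid(4)
y_low = relax(u=-4.0)   # weak input: activity relaxes toward sigmoid(-4)
```

At steady state the activity equals the sigmoid of the input, which is the "static relationship" the abstract mentions; the first-order term only shapes the transient.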

Relevance:

60.00%

Publisher:

Abstract:

Our interaction with the surrounding environment depends both on the different types of external stimuli we perceive (tactile, visual, acoustic, etc.) and on their processing by our nervous system. Sometimes, however, the integration and processing of these inputs can produce illusory effects. This happens, for example, in tactile perception. Indeed, the perception of tactile distances varies with the body region considered. The fact that distances on the skin are frequently misperceived was discovered about a century ago by Weber. In particular, a given physical distance is perceived as larger on body parts with a higher density of mechanoreceptors than on body parts with a lower density. Besides this illusion, an important phenomenon observed in vivo is that the perception of tactile distance depends on the orientation of the stimuli applied to the skin: the distance perceived on a skin region varies with the orientation of the applied stimuli. Recently, Longo and Haggard (Longo & Haggard, J. Exp. Psychol. Hum. Percept. Perform. 37: 720-726, 2011), in order to investigate how our body is represented in our brain, compared tactile distances at different orientations on the hand and found that the distance between two point stimuli is perceived as larger when applied across the hand than along it. This illusion is known as the orientation-dependent tactile illusion, and several results in the literature show that it depends on the distance between the two point stimuli on the skin. Indeed, Green reports (Green, Percept. Psychophys. 31: 315-323, 1982) that the larger the applied distance, the larger the resulting illusory effect.
Weber's Illusion and the orientation-dependent tactile illusion are explained in the literature by differences in receptor density, by cortical magnification effects in the primary somatosensory cortex (regions of the somatosensory cortex of different sizes are devoted to different body regions), and by differences in the size and shape of receptive fields. However, these illusory effects are much weaker than would be expected from the physiological mechanisms listed above. This suggests that the tactile information processed in the primary somatosensory cortex undergoes further processing steps in higher-level cortical areas, which act to reduce the gap between the distance perceived across the skin and the distance perceived along it, making them more similar. This process is called the "Rescaling Process". The neural mechanisms that implement the Rescaling Process in the brain remain largely unknown. The aim of my thesis project was therefore to build a neural network model simulating tactile perception, the orientation-dependent illusion, and the rescaling process, putting forward possible hypotheses about the neural mechanisms involved. The computational model consists of two neural layers that process tactile information. One represents a lower-level cortical area (called Area1), in which a first, distorted tactile representation is formed. This layer could correspond to an area of the primary somatosensory cortex, where the representation of tactile distance is significantly distorted by receptive-field anisotropy and cortical magnification.
The second layer (called Area2) represents a higher-level area that receives tactile information from the first and reduces its distortion through the Rescaling Process. This layer could correspond to higher cortical areas (for example, parietal or temporal cortex) that are also involved in perceiving tactile distances and implicated in the Rescaling Process. In the model, neurons in Area1 receive information from the external stimuli (applied to the skin) and send information to neurons in Area2 through excitatory feed-forward synapses. Neurons within the same layer communicate through lateral synapses with a Mexican-hat profile. It is important to note that the implemented network is mainly a conceptual model that does not aim to provide an accurate reproduction of physiological and anatomical structures. It should therefore be considered at an abstract level of implementation, without an exact correspondence between the model layers and anatomical brain regions. Nevertheless, the mechanisms included in the model are biologically plausible, so the network can help us to better understand the many mechanisms at work in our brain when processing different tactile inputs. Indeed, the model reproduces several results reported in the articles by Green and by Longo & Haggard.
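The Mexican-hat lateral synapses mentioned above are commonly written as a difference of Gaussians; a small sketch under that assumption (amplitudes and widths are hypothetical, not the thesis values):

```python
import math

def mexican_hat(d, a_ex=1.0, s_ex=1.0, a_in=0.6, s_in=3.0):
    """Difference-of-Gaussians lateral weight as a function of the distance d
    between two neurons: short-range excitation, longer-range inhibition."""
    excitation = a_ex * math.exp(-d * d / (2 * s_ex ** 2))
    inhibition = a_in * math.exp(-d * d / (2 * s_in ** 2))
    return excitation - inhibition

w_center = mexican_hat(0.0)  # net excitatory at the center
w_near = mexican_hat(1.0)    # weaker, still excitatory nearby
w_far = mexican_hat(4.0)     # net inhibitory in the surround
```

With this profile, nearby neurons reinforce each other's activity while distant ones compete, which is what lets such layers sharpen an activation bubble around the stimulated skin site.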

Relevance:

60.00%

Publisher:

Abstract:

In a world focused on the need to produce energy for a growing population while reducing atmospheric emissions of carbon dioxide, organic Rankine cycles represent one solution to this goal. This study focuses on the design and optimization of axial-flow turbines for organic Rankine cycles. From the turbine designer's point of view, most of these fluids exhibit peculiar characteristics, such as a small enthalpy drop, a low speed of sound, and a large expansion ratio. A computational model for the prediction of axial-flow turbine performance is developed and validated against experimental data. The model calculates turbine performance to within ±3%. The design procedure is coupled with an optimization process, performed using a genetic algorithm with the turbine total-to-static efficiency as the objective function. The computational model is integrated into a wider analysis of thermodynamic cycle units by providing the optimal turbine design. First, the calculation routine is applied in the context of the Draugen offshore platform, where three heat recovery systems are compared. The turbine performance is investigated for three competing bottoming cycles: an organic Rankine cycle (operating with cyclopentane), a steam Rankine cycle, and an air bottoming cycle. Findings indicate the air turbine as the most efficient solution (total-to-static efficiency = 0.89), while the cyclopentane turbine proves the most flexible and compact technology (2.45 ton/MW and 0.63 m3/MW). Furthermore, the study shows that, for the organic and steam Rankine cycles, the optimal design configurations for the expanders do not coincide with those of the thermodynamic cycles. This suggests that a more accurate analysis could be obtained by including the computational model in the simulations of the thermodynamic cycles. Afterwards, the performance analysis is carried out by comparing three organic fluids: cyclopentane, MDM and R245fa.
Results suggest MDM as the most effective fluid from the turbine performance viewpoint (total-to-total efficiency = 0.89). On the other hand, cyclopentane guarantees a greater net power output of the organic Rankine cycle (P = 5.35 MW), while R245fa represents the most compact solution (1.63 ton/MW and 0.20 m3/MW). Finally, the influence of the composition of an isopentane/isobutane mixture on both the thermodynamic cycle performance and the expander isentropic efficiency is investigated. Findings show how the mixture composition affects the turbine efficiency and so the cycle performance. Moreover, the analysis demonstrates that the use of binary mixtures leads to an enhancement of the thermodynamic cycle performance.
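The genetic-algorithm optimization loop described above can be sketched generically; here a toy two-parameter function stands in for the turbine total-to-static efficiency evaluation (the real objective requires the full turbine model, so everything below is illustrative):

```python
import random

def efficiency(x):
    """Toy stand-in for the total-to-static efficiency: peaks at (0.5, 0.3)."""
    a, b = x
    return 0.89 - (a - 0.5) ** 2 - (b - 0.3) ** 2

def genetic_optimize(fitness, n_pop=40, n_gen=60, mut=0.05, seed=3):
    """Simple GA: truncation selection, one-point crossover, Gaussian mutation.
    The best half survives unchanged (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.random(), rng.random()] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: n_pop // 2]
        children = []
        while len(children) < n_pop - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randint(0, 1)                         # one-point crossover
            child = p1[: cut + 1] + p2[cut + 1 :]
            child = [g + rng.gauss(0, mut) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_optimize(efficiency)  # converges near the efficiency peak
```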

Relevance:

60.00%

Publisher:

Abstract:

Resolution of multisensory deficits has been observed in teenagers with Autism Spectrum Disorders (ASD) for complex, social speech stimuli; this resolution extends to more basic multisensory processing involving low-level stimuli. In particular, a delayed transition of multisensory integration (MSI) from a default state of competition to one of facilitation has been observed in children with ASD. In other words, the full maturation of MSI is achieved later in ASD. In the present study a neuro-computational model is used to reproduce behavioral patterns observed experimentally, modeling a bisensory reaction time task in which auditory and visual stimuli are presented in random sequence, alone (A or V) or together (AV). The model explains how the default competitive state can be implemented via mutual inhibition between primary sensory areas, and how the shift toward the classical multisensory facilitation observed in adults results from inhibitory cross-modal connections becoming excitatory during development. Model results are consistent with stronger cross-modal inhibition in children with ASD than in neurotypical (NT) children, suggesting that the transition toward a cooperative interaction between sensory modalities takes longer to occur. Interestingly, the model also predicts the difference between unisensory switch trials (in which the sensory modality switches) and unisensory repeat trials (in which the sensory modality repeats). This is due to an inhibitory mechanism with slow dynamics, driven by the preceding stimulus, which inhibits the processing of the incoming one when it is of the opposite sensory modality. These findings link the cognitive framework delineated by the empirical results to a plausible neural implementation.
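A toy accumulator sketch of the competition-versus-facilitation idea (not the thesis model; all parameters are illustrative): two cross-coupled units integrate their inputs, and the sign of the cross-connection decides whether a bisensory trial is slowed down or sped up.

```python
def reaction_time(inputs, cross=0.0, threshold=10.0, dt=0.1, max_steps=10000):
    """Two coupled linear accumulators; returns the time for either unit to
    reach threshold. cross < 0 models competition, cross > 0 facilitation."""
    a = b = 0.0
    for step in range(max_steps):
        da = inputs[0] + cross * b
        db = inputs[1] + cross * a
        a = max(0.0, a + dt * da)
        b = max(0.0, b + dt * db)
        if a >= threshold or b >= threshold:
            return (step + 1) * dt
    return None  # no response within the trial window

rt_compete = reaction_time((1.0, 1.0), cross=-0.05)     # "child-like" inhibition
rt_facilitate = reaction_time((1.0, 1.0), cross=+0.05)  # "adult-like" facilitation
```

Making `cross` grow from negative to positive over "development" would reproduce, in this crude form, the transition from slowed to speeded bisensory responses described above.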

Relevance:

30.00%

Publisher:

Abstract:

Sudden cardiac death due to ventricular arrhythmia is one of the leading causes of mortality in the world. In recent decades, anti-arrhythmic drugs that prolong the refractory period by prolonging the cardiac action potential duration (APD) have proven valuable in preventing relevant human arrhythmias. However, it has long been observed that this "class III antiarrhythmic effect" diminishes at faster heart rates, which is a major weakness, since fast rates are precisely when arrhythmias are most prone to occur. Mathematical modeling is a well-established tool for investigating cardiac cell behavior. In the last 60 years a multitude of cardiac models has been created; since the pioneering work of Hodgkin and Huxley (1952), who first described the ionic currents of the squid giant axon quantitatively, mathematical modeling has made great strides. The O'Hara model, which I employed in this research work, is one of the modern computational models of the ventricular myocyte, part of a generation that began in 1991 with the ventricular cell model by Noble et al. The strength of these models is that they can generate novel predictions, suggest experiments, and provide a quantitative understanding of the underlying mechanisms. The drawback, of course, is that they remain simplified models and do not fully represent the real system. The overall goal of this research is to provide an additional tool, through mathematical modeling, for understanding the behavior of the main ionic currents involved during the action potential (AP), especially the differences between slower and faster heart rates: in particular, to evaluate the role of rate dependence on the action potential duration, to implement a new method for interpreting the behavior of ionic currents after a perturbation, and to verify the validity of the approach proposed by Antonio Zaza using an injected current as the perturbation.
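The rate dependence of APD discussed above is often summarized by a restitution curve; a sketch with a standard exponential restitution (illustrative constants, not the O'Hara model) shows how the steady-state APD shortens at fast pacing, i.e. why an APD-prolonging effect measured at slow rates can shrink at fast rates:

```python
import math

def apd_restitution(di, apd_max=250.0, amplitude=120.0, tau=100.0):
    """Exponential APD restitution (ms): APD shortens as the diastolic
    interval DI shortens. Constants are illustrative, not fitted."""
    return apd_max - amplitude * math.exp(-di / tau)

def steady_state_apd(cycle_length, iterations=200):
    """Iterate APD_{n+1} = f(CL - APD_n) to steady state at a fixed CL (ms)."""
    apd = 150.0
    for _ in range(iterations):
        di = max(cycle_length - apd, 1.0)  # diastolic interval
        apd = apd_restitution(di)
    return apd

apd_slow = steady_state_apd(1000.0)  # ~60 bpm: long DI, long APD
apd_fast = steady_state_apd(400.0)   # ~150 bpm: short DI, shorter APD
```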

Relevance:

30.00%

Publisher:

Abstract:

The cellular basis of cardiac pacemaking activity, and specifically the quantitative contributions of particular mechanisms, is still debated. Reliable computational models of sinoatrial nodal (SAN) cells may provide mechanistic insights, but competing models are built from different data sets and with different underlying assumptions. To understand quantitative differences between alternative models, we performed thorough parameter sensitivity analyses of the SAN models of Maltsev & Lakatta (2009) and Severi et al (2012). Model parameters were randomized to generate a population of cell models with different properties, simulations performed with each set of random parameters generated 14 quantitative outputs that characterized cellular activity, and regression methods were used to analyze the population behavior. Clear differences between the two models were observed at every step of the analysis. Specifically: (1) SR Ca2+ pump activity had a greater effect on SAN cell cycle length (CL) in the Maltsev model; (2) conversely, parameters describing the funny current (If) had a greater effect on CL in the Severi model; (3) changes in rapid delayed rectifier conductance (GKr) had opposite effects on action potential amplitude in the two models; (4) within the population, a greater percentage of model cells failed to exhibit action potentials in the Maltsev model (27%) compared with the Severi model (7%), implying greater robustness in the latter; (5) confirming this initial impression, bifurcation analyses indicated that smaller relative changes in GKr or Na+-K+ pump activity led to failed action potentials in the Maltsev model. Overall, the results suggest experimental tests that can distinguish between models and alternative hypotheses, and the analysis offers strategies for developing anti-arrhythmic pharmaceuticals by predicting their effect on the pacemaking activity.

Relevance:

30.00%

Publisher:

Abstract:

Biodiesel represents a possible substitute for fossil fuels; for this reason a good comprehension of the kinetics involved is important. Due to the complexity of the biodiesel mixture, a common practice is the use of surrogate molecules to study its reactivity. This work presents the experimental and computational results obtained for the oxidation and pyrolysis of methane and methyl formate conducted in a plug flow reactor. The work was divided into two parts: the first was the assembly of the setup, while in the second the experimental results were compared with model results obtained using models available in the literature. The study began with methane, for which a validated model was available, making it possible to verify the reliability of the experimental results. Attention then turned to methyl formate. All the analyses were conducted at different temperatures and pressures and, for the oxidation, at different equivalence ratios. The results show that a good comprehension of the kinetics has been reached, but further effort is needed to better evaluate kinetic parameters such as the activation energy. The results also indicate that the assembled setup is suitable for studying oxidation and pyrolysis, and it will therefore be employed to study longer-chain esters, with the aim of better understanding the kinetics of the molecules that make up the biodiesel mixture.
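The activation energy mentioned above enters through the Arrhenius law k = A·exp(-Ea/RT); a small sketch (with an illustrative pre-exponential factor and activation energy, not fitted values) also shows how Ea can be recovered from rate constants at two temperatures:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def arrhenius(temperature, a_factor=1.0e13, e_act=2.0e5):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    A (1/s) and Ea (J/mol) here are illustrative, not fitted."""
    return a_factor * math.exp(-e_act / (R * temperature))

k_low = arrhenius(900.0)    # rate constant at 900 K
k_high = arrhenius(1200.0)  # rate constant at 1200 K

# Recover Ea from the slope of ln(k) vs. 1/T between the two temperatures:
e_est = -R * (math.log(k_high) - math.log(k_low)) / (1 / 1200.0 - 1 / 900.0)
```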

Relevance:

30.00%

Publisher:

Abstract:

A study of the pyrolysis and oxidation (phi = 0.5, 1, 2) of methane and of methyl formate (phi = 0.5) in a laboratory flow reactor (length = 50 cm, inner diameter = 2.5 cm) has been carried out at 1-4 atm over the 300-1300 K temperature range. Exhaust gas species were analyzed with a gas chromatographic system, a Varian CP-4900 PRO Micro-GC with a TCD detector, using helium as carrier for a Molecular Sieve 5Å column and nitrogen for a COX column, operated at 65 °C and 150 kPa. Model simulations using the NTUA [1], Fisher et al. [12], Grana [13] and Dooley [14] kinetic mechanisms have been performed with CHEMKIN. The work provides a basis for further development and optimization of existing detailed chemical kinetic schemes.

Relevance:

30.00%

Publisher:

Abstract:

Osteoporosis is one of the major causes of mortality among the elderly. Nowadays, areal bone mineral density (aBMD) is used as the diagnostic criterion for osteoporosis; however, it is only a moderate predictor of femur fracture risk and does not capture the effect of some anatomical and physiological properties on bone strength. Data from past research suggest that most fragility femur fractures occur in patients with aBMD values outside the pathological range. Subject-specific finite element (FE) models derived from computed tomography data are considered better tools to non-invasively assess hip fracture risk. In particular, the Bologna Biomechanical Computed Tomography (BBCT) is an in silico methodology that uses a subject-specific FE model to predict bone strength. Different studies demonstrated that the modeling pipeline can increase the predictive accuracy of osteoporosis detection and assess the efficacy of new antiresorptive drugs. However, one critical aspect that must be properly addressed before using the technology in clinical practice is the assessment of model credibility. The aim of this study was to define and perform verification and uncertainty quantification analyses on the BBCT methodology, following the risk-based credibility assessment framework recently proposed in the ASME V&V 40 standard. The analyses focused on the main verification tests used in computational solid mechanics: force and moment equilibrium checks, mesh convergence analyses, mesh quality metrics, and evaluation of the uncertainties associated with the definition of the boundary conditions and the material properties mapping. Results of these analyses showed that the FE model is correctly implemented and solved. The operation that most affects the model results is the material properties mapping step. This work represents an important step that, together with the ongoing clinical validation activities, will contribute to demonstrating the credibility of the BBCT methodology.
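A mesh convergence analysis like the one above typically checks the observed order of convergence via Richardson extrapolation; a sketch on synthetic values (not BBCT outputs) for meshes refined by a constant factor of 2:

```python
import math

def richardson(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed convergence order p and the extrapolated value
    from three solutions on meshes refined by a constant factor r."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_exact

# Synthetic "bone strength" values behaving as f(h) = 10 + 0.5 * h**2,
# i.e. second-order convergence toward 10 as element size h shrinks:
h = [0.4, 0.2, 0.1]
f = [10.0 + 0.5 * hi ** 2 for hi in h]
p, f_exact = richardson(f[0], f[1], f[2])
```

For a well-behaved model, p should match the expected order of the elements, and the gap between the fine-mesh value and `f_exact` gives a discretization error estimate.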

Relevance:

30.00%

Publisher:

Abstract:

Deep Learning architectures give brilliant results in a large variety of fields, but a comprehensive theoretical description of their inner workings is still lacking. In this work, we try to understand the behavior of neural networks by modelling them within the frameworks of Thermodynamics and Condensed Matter Physics. We approach neural networks as in a real laboratory, measuring the frequency spectrum and the entropy of the weights of the trained model. The stochasticity of training occupies a central role in the dynamics of the weights and makes it difficult to assimilate neural networks to simple physical systems. However, the analogy with Thermodynamics and the introduction of a well-defined temperature lead us to an interesting result: if we eliminate the "hottest" filters from a CNN, the performance of the model remains the same, whereas if we eliminate the "coldest" ones, performance degrades drastically. This result could be exploited in a training loop that eliminates the filters that do not contribute to loss reduction. In this way, the computational cost of training would be reduced and, more importantly, this would be done by following a physical model. In any case, besides its practical applications, our analysis shows that a new and improved modeling of Deep Learning systems can pave the way to new and more efficient algorithms.
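The pruning loop suggested above can be sketched abstractly; how the per-filter "temperature" is actually defined in the thesis is not detailed here, so this sketch simply takes any per-filter scalar supplied by the caller and zeroes out the hottest fraction:

```python
def prune_hottest(filters, temperatures, fraction=0.25):
    """Zero out the given fraction of filters with the highest 'temperature'.
    'filters' is a list of weight lists; 'temperatures' is any per-filter
    scalar (its physical definition is left to the caller)."""
    n_prune = int(len(filters) * fraction)
    ranked = sorted(range(len(filters)),
                    key=lambda i: temperatures[i], reverse=True)
    hot = set(ranked[:n_prune])
    return [[0.0] * len(f) if i in hot else list(f)
            for i, f in enumerate(filters)]

filters = [[0.5, -0.2], [0.1, 0.1], [0.9, 0.3], [-0.4, 0.2]]
temps = [3.2, 0.4, 5.1, 1.0]  # illustrative per-filter temperatures
pruned = prune_hottest(filters, temps, fraction=0.25)
```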

Relevance:

30.00%

Publisher:

Abstract:

When it comes to designing a structure, architects and engineers want to join forces to create and build the most beautiful and efficient building. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between both professions. In architecture, there has always been a particular interest in creating new shapes and types of structures, inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking and designing structures; this mindset led, through study, to the current best practices in construction. At a certain point, however, both disciplines were limited by traditional manufacturing constraints. Over the last decades, much technological progress has been made, allowing designers to go beyond these constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Both technologies allow unusual and complex structural shapes to be explored and built, in addition to reducing costs and environmental impact. Through this study, the author makes use of the aforementioned technologies and assesses their potential, first to design an aesthetically pleasing tree-like column, and then to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A draft catalog of the latter, and methods to establish it, are then proposed and discussed. Finally, the buckling behavior of the cross-section is assessed considering both standard steel and WAAM material properties.
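The buckling assessment mentioned in the closing sentence rests on the classical Euler formula; a sketch with illustrative section properties (not the WAAM sandwich section from the thesis):

```python
import math

def euler_critical_load(e_modulus, inertia, length, k=1.0):
    """Euler buckling load P_cr = pi^2 * E * I / (k * L)^2 for an ideal column.
    k is the effective-length factor (1.0 = pinned-pinned, 2.0 = cantilever)."""
    return math.pi ** 2 * e_modulus * inertia / (k * length) ** 2

# Illustrative steel column: E = 210 GPa, I = 1.0e-6 m^4, L = 3 m
p_pinned = euler_critical_load(210e9, 1.0e-6, 3.0)            # pinned ends
p_cantilever = euler_critical_load(210e9, 1.0e-6, 3.0, k=2.0)  # fixed-free
```

For the actual WAAM material, the same formula would take a reduced, printing-direction-dependent modulus, which is one reason the thesis compares standard steel and WAAM properties.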

Relevance:

30.00%

Publisher:

Abstract:

Historic vaulted masonry structures often need strengthening interventions that can effectively improve their structural performance, especially during seismic events, while respecting the existing setting and modern conservation requirements. In this context, innovative materials such as fiber-reinforced composites have proven an effective solution that can satisfy both aspects. This work aims to provide insight into the computational modeling of a full-scale masonry vault strengthened with fiber-reinforced composite materials, and to analyze how the arrangement of the reinforcement influences the efficiency of the intervention. First, a parametric model of a cross vault is proposed, focusing on a realistic representation of its micro-geometry. Then, numerical pushover analyses of several barrel vaults reinforced with different reinforcement configurations are performed. Finally, the results are collected and discussed in terms of the force-displacement curves obtained for each proposed configuration.