869 results for swd: Computational geometry
Abstract:
We have recently introduced a new strategy, based on the meccano method [1, 2], to construct a T-spline parameterization of 2D and 3D geometries for application in isogeometric analysis [3, 4]. The proposed method demands only a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between the object and the parametric domain, i.e. the meccano. The key of the method lies in defining an isomorphic transformation between the parametric and physical T-meshes, finding the optimal positions of the interior nodes once the meccano boundary nodes have been mapped to the boundary of the physical domain…
Abstract:
This work presents a novel approach to solving a two-dimensional problem with an adaptive finite element method. The most common strategy for dealing with nested adaptivity is to generate a mesh that correctly represents the geometry and the input parameters, and to refine this mesh locally to obtain the most accurate solution. In contrast to this approach, the authors propose a technique using independent meshes for the geometry, the input data and the unknowns. Each particular mesh is obtained by local nested refinement of the same coarse mesh in the parametric space…
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods that can computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances, specific to each type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing these structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps, and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and the clustering coefficient of the protein contact network capture characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as the leucine zippers that drive the dimerization of many transcription factors, and more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards understanding the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data form the basis for the design of new strategies for tackling problems such as the prediction of protein structure and function.
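As a minimal sketch of the quantities discussed above, the following Python fragment builds a binary contact map from C-alpha coordinates and computes the two small-world descriptors (characteristic path length and clustering coefficient) on the resulting contact network; the 8 Å cutoff and the random coordinates are illustrative assumptions, not the thesis's exact protocol.

```python
# Sketch: residue contact map from C-alpha coordinates, plus the two
# small-world descriptors of the contact network discussed above.
import numpy as np
import networkx as nx

def contact_map(ca_coords: np.ndarray, cutoff: float = 8.0) -> np.ndarray:
    """Binary contact map: 1 where two residues' C-alpha atoms lie within cutoff (angstrom)."""
    d = np.linalg.norm(ca_coords[:, None, :] - ca_coords[None, :, :], axis=-1)
    cmap = (d < cutoff).astype(int)
    np.fill_diagonal(cmap, 0)          # a residue is not its own contact
    return cmap

rng = np.random.default_rng(0)
coords = rng.random((120, 3)) * 30.0   # placeholder for real C-alpha coordinates
g = nx.from_numpy_array(contact_map(coords))
if nx.is_connected(g):
    print("characteristic path length:", nx.average_shortest_path_length(g))
print("clustering coefficient:", nx.average_clustering(g))
```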
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of the annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences that have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity arise from multi-domain proteins, from proteins that share common domains but do not necessarily share the same function, and from the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of annotating multi-domain proteins and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates over the length of the protein sequences within clusters ensures that multi-domain proteins, when present, can serve as templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
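A compact sketch of the clustering step described above might look as follows: BLAST hits are filtered by joint constraints on sequence identity and on alignment coverage of both sequences (which enforces the length homogeneity mentioned above), and the surviving pairs are grouped by single-linkage. The thresholds and the hit-tuple layout are illustrative assumptions.

```python
# Sketch: identity/coverage filtering of BLAST hits followed by
# single-linkage clustering via union-find. Thresholds are hypothetical.
from collections import defaultdict

ID_MIN, COV_MIN = 0.40, 0.90          # hypothetical stringent constraints

def passes(identity, aln_len, qlen, slen):
    cov_q, cov_s = aln_len / qlen, aln_len / slen
    return identity >= ID_MIN and cov_q >= COV_MIN and cov_s >= COV_MIN

def clusters(pairs):
    """Single-linkage clustering over the accepted pairs."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)
    out = defaultdict(set)
    for x in list(parent):
        out[find(x)].add(x)
    return list(out.values())

# (query, subject, identity, alignment length, query length, subject length)
hits = [("p1", "p2", 0.62, 180, 190, 185), ("p2", "p3", 0.55, 170, 190, 175)]
ok = [(q, s) for q, s, ident, al, ql, sl in hits if passes(ident, al, ql, sl)]
print(clusters(ok))
```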
Abstract:
Some fundamental biological processes such as embryonic development have been preserved during evolution and are common to species belonging to different phylogenetic positions, yet are nowadays still largely unknown. The understanding of the cell morphodynamics that lead to organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. In this work we design a chain of image-processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, which has been widely validated as a model organism for understanding vertebrate development, gene function and healing and repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins targeted to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either through a generalized form of the Hough transform or by identifying nuclei as local maxima after a smoothing preprocessing step. Membrane and nucleus shapes are reconstructed using PDE-based variational techniques such as the Subjective Surfaces and the Chan-Vese methods. Cell tracking is performed by combining the information previously detected on cell shapes and positions with biological regularization constraints. Our results, validated manually, reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all cells tracked from the late sphere stage for at least 6 hours with less than 2% error. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, and of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
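As a minimal sketch of the second detection route mentioned above (smoothing followed by local-maxima extraction on a 3D fluorescence stack), the following fragment can serve; the sigma, neighborhood size and intensity floor are illustrative assumptions.

```python
# Sketch: nucleus-center candidates as local maxima of a Gaussian-smoothed stack.
import numpy as np
from scipy import ndimage as ndi

def detect_nuclei(volume: np.ndarray, sigma=2.0, size=5, min_intensity=0.2):
    """Return (z, y, x) candidate nucleus centers from the smoothed stack."""
    smooth = ndi.gaussian_filter(volume.astype(float), sigma)
    # a voxel is a local maximum if it equals the max over its neighborhood
    is_max = smooth == ndi.maximum_filter(smooth, size=size)
    is_max &= smooth > min_intensity * smooth.max()   # suppress background maxima
    return np.argwhere(is_max)

stack = np.random.rand(32, 128, 128)      # placeholder for a microscopy volume
print(detect_nuclei(stack)[:5])
```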
Abstract:
The structural peculiarities of a protein are related to its biological function. In the fatty acid elongation cycle, one small carrier protein shuttles and delivers the acyl intermediates from one enzyme to the other. The carrier has to recognize several enzymatic counterparts, interact specifically with each of them, and finally deliver the carried substrate transiently to the active site. Carrying out such a complex game requires the players to be flexible and to adapt their structure efficiently to the interacting protein or substrate. In a drug discovery effort, the structure-function relationships of a target system should be taken into account in order to interfere most effectively with its biological function. In this doctoral work, the essential role of structural plasticity in key steps of fatty acid biosynthesis in Plasmodium falciparum is investigated by means of molecular simulations. The key steps considered include the delivery of acyl substrates and the structural rearrangements of catalytic pockets upon ligand binding. The fundamental bases for carrier/enzyme recognition and interaction are also put forward. The structural features of the target have driven the selection of appropriate drug discovery tools, which captured the dynamics of the biological processes and could allow the rational design of novel inhibitors. The model may prospectively be used for the identification of novel pathway-based antimalarial compounds.
Abstract:
The work of my thesis focuses on the impact of tsunami waves in limited basins. By limited basins I mean those basins capable of significantly modifying the tsunami signal with respect to the surrounding open sea. Based on this definition, we consider as limited basins not only harbours but also straits, channels, seamounts and oceanic shelves. I have considered two different examples, the first dealing with the Seychelles Islands platform in the Indian Ocean, the second focusing on the Messina Strait and the harbour of the city of Messina itself (Italy). The Seychelles platform is bathymetrically distinct from the surrounding ocean, with depths changing rapidly from 2 km to 70 m over short horizontal distances. The study of the platform response to tsunami propagation is based on the simulation of the mega-event of 26 December 2004. Starting from a hypothesis for the earthquake causative fault, the ensuing tsunami has been numerically simulated. I analysed synthetic tide-gauge records at several virtual tide gauges aligned along the direction from the source to the platform. A substantial uniformity of the tsunami signals is observed in all the calculated open-ocean tide-gauge records, while the signals calculated at two points on the Seychelles platform show different features both in terms of amplitude and in terms of period of the perturbation. To better understand the frequency content of the different calculated marigrams, a spectral analysis was carried out. In particular, the ratio between the spectrum of the tide-gauge records calculated on the platform and the average spectrum of the open-ocean tide-gauge records was considered. The main result is that, while in the average open-ocean spectrum the fundamental peak is related to the source, the platform introduces further peaks linked both to the bathymetric configuration and to the coastal geometry. The Messina Strait represents an interesting case because it is a sort of channel open at both the northern and the southern ends and, furthermore, contains the limited basin of the Messina harbour. In this case the study has been carried out differently from the Seychelles case. The basin was forced along a boundary of the computational domain with sinusoidal functions of different periods within the typical tsunami frequency band. The tsunami was simulated numerically and, in particular, tide-gauge records were calculated for every forcing function at different points both outside and inside the channel and the Messina harbour. Apart from the tide-gauge records in the source region, which almost immediately reach stationarity, all the computed signals in the channel and in the Messina harbour present a transient of variable amplitude followed by a stationary part. Based exclusively on this last part, I calculated the amplification curves for each site. I found that the maximum amplification is obtained for forcing periods of approximately 10 minutes.
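The spectral-ratio analysis described above can be sketched in a few lines: the amplitude spectrum of a platform record is divided by the average spectrum of the open-ocean records, so that peaks in the ratio flag the periods introduced by the platform. The signals, sampling step and record length below are synthetic placeholders, not data from the simulations.

```python
# Sketch: spectral ratio of a platform record over the average open-ocean spectrum.
import numpy as np

def amplitude_spectrum(signal: np.ndarray, dt: float):
    freqs = np.fft.rfftfreq(signal.size, d=dt)
    return freqs, np.abs(np.fft.rfft(signal))

dt = 10.0                                   # sampling step in seconds (assumed)
t = np.arange(0, 6 * 3600, dt)              # six hours of record
ocean = [np.sin(2 * np.pi * t / 1800.0) + 0.1 * np.random.randn(t.size)
         for _ in range(5)]                 # stand-ins for open-ocean gauges
platform = np.sin(2 * np.pi * t / 1800.0) + 0.8 * np.sin(2 * np.pi * t / 600.0)

f, plat_spec = amplitude_spectrum(platform, dt)
ocean_avg = np.mean([amplitude_spectrum(s, dt)[1] for s in ocean], axis=0)
ratio = plat_spec / (ocean_avg + 1e-12)     # peaks flag periods added by the platform
print("strongest amplification at period (s):", 1.0 / f[1:][np.argmax(ratio[1:])])
```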
Abstract:
The wheel-rail contact analysis plays a fundamental role in the multibody modeling of railway vehicles. A good contact model must provide an accurate description of the global contact phenomena (contact forces and torques, number and position of the contact points) and of the local contact phenomena (position and shape of the contact patch, stresses and displacements). The model also has to assure high numerical efficiency (in order to be implemented directly online within multibody models) and good compatibility with commercial multibody software (Simpack Rail, Adams Rail). The wheel-rail contact problem has been discussed by several authors and many models can be found in the literature. The contact models can be subdivided into two different categories: global models and local (or differential) models. Currently, as regards the global models, the main approaches to the problem are the so-called rigid contact formulation and the semi-elastic contact description. The rigid approach considers the wheel and the rail as rigid bodies. The contact is imposed by means of constraint equations and the contact points are detected during the dynamic simulation by solving the nonlinear differential-algebraic equations associated with the constrained multibody system. Indentation between the bodies is not permitted and the normal contact forces are calculated through the Lagrange multipliers. Finally, Hertz's and Kalker's theories allow the evaluation of the shape of the contact patch and of the tangential forces, respectively. The semi-elastic approach also considers the wheel and the rail as rigid bodies. However, in this case no kinematic constraints are imposed and indentation between the bodies is permitted. The contact points are detected by means of approximate procedures (based on look-up tables and simplifying hypotheses on the problem geometry). The normal contact forces are calculated as a function of the indentation while, as in the rigid approach, Hertz's and Kalker's theories allow the evaluation of the shape of the contact patch and of the tangential forces. Both of the described multibody approaches are computationally very efficient, but their generality and accuracy often turn out to be insufficient because the physical hypotheses behind these theories are too restrictive and, in many circumstances, unverified. In order to obtain a complete description of the contact phenomena, local (or differential) contact models are needed. In other words, wheel and rail have to be considered as elastic bodies governed by the Navier equations and the contact has to be described by suitable analytical contact conditions. The contact between elastic bodies has been widely studied in the literature, both in the general case and in the rolling case, and many procedures based on variational inequalities, FEM techniques and convex optimization have been developed. This kind of approach assures high generality and accuracy but still entails very large computational costs and memory consumption. Owing to this computational load and, referring to the current state of the art, the integration of multibody and differential modeling is almost absent from the literature, especially in the railway field.
However, this integration is very important because only differential modeling allows an accurate analysis of the contact problem (in terms of contact forces and torques, position and shape of the contact patch, stresses and displacements), while multibody modeling is the standard in the study of railway dynamics. In this thesis some innovative wheel-rail contact models developed during the Ph.D. activity will be described. Concerning the global models, two new models belonging to the semi-elastic approach will be presented; the models satisfy the following specifications: 1) the models have to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) the models have to handle generic railway tracks and generic wheel and rail profiles; 3) the models have to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the models have to evaluate the number and position of the contact points and, for each point, the contact forces and torques; 4) the models have to be implementable directly online within multibody models, without look-up tables; 5) the models have to assure computation times comparable with those of commercial multibody software (Simpack Rail, Adams Rail) and compatible with RT and HIL applications; 6) the models have to be compatible with commercial multibody software (Simpack Rail, Adams Rail). The most innovative aspect of the new global contact models concerns the detection of the contact points. In particular, both models aim to reduce the dimension of the algebraic problem by means of suitable analytical techniques. This kind of reduction yields a high numerical efficiency that makes the online implementation of the new procedure possible, with performance comparable to that of commercial multibody software. At the same time, the analytical approach assures high accuracy and generality. Concerning the local (or differential) contact models, one new model satisfying the following specifications will be presented: 1) the model has to be 3D and consider all six relative degrees of freedom between wheel and rail; 2) the model has to handle generic railway tracks and generic wheel and rail profiles; 3) the model has to assure a general and accurate handling of multiple contacts without simplifying hypotheses on the problem geometry; in particular, the model has to be able to calculate both the global contact variables (contact forces and torques) and the local contact variables (position and shape of the contact patch, stresses and displacements); 4) the model has to be implementable directly online within multibody models; 5) the model has to assure high numerical efficiency and reduced memory consumption in order to achieve a good integration between multibody and differential modeling (the basis for the local contact models); 6) the model has to be compatible with commercial multibody software (Simpack Rail, Adams Rail). In this case the most innovative aspects of the new local contact model concern the contact modeling (by means of suitable analytical conditions) and the implementation of the numerical algorithms needed to solve the discrete problem arising from the discretization of the original continuum problem.
Moreover, during the development of the local model, achieving a good compromise between accuracy and efficiency turned out to be very important for obtaining a good integration between multibody and differential modeling. At this point the contact models have been inserted within a 3D multibody model of a railway vehicle to obtain a complete model of the wagon. The railway vehicle chosen as benchmark is the Manchester Wagon, whose physical and geometrical characteristics are readily available in the literature. The model of the whole railway vehicle (multibody model and contact model) has been implemented in the Matlab/Simulink environment. The multibody model has been implemented in SimMechanics, a Matlab toolbox specifically designed for multibody dynamics, while for the contact models C S-functions have been used; this particular Matlab architecture allows an efficient connection between the Matlab/Simulink and the C/C++ environments. The 3D multibody model of the same vehicle (this time equipped with a standard contact model based on the semi-elastic approach) has then also been implemented in Simpack Rail, a commercial multibody software package for railway vehicles that is widely tested and validated. Finally, numerical simulations of the vehicle dynamics have been carried out on many different railway tracks with the aim of evaluating the performance of the whole model. The comparison between the results obtained by the Matlab/Simulink model and those obtained by the Simpack Rail model has allowed an accurate and reliable validation of the new contact models. In conclusion to this brief introduction to my Ph.D. thesis, we would like to thank Trenitalia and the Regione Toscana for the support provided during the entire Ph.D. activity. We would also like to thank INTEC GmbH, the company that develops the software Simpack Rail, with which we are currently working to develop innovative toolboxes specifically designed for wheel-rail contact analysis.
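For readers unfamiliar with the two theories invoked above, the following fragment sketches, under assumed material constants, the standard ingredients that the semi-elastic approach combines: a Hertzian normal force computed from the indentation, and tangential forces from Kalker's linear theory (spin contribution omitted). The coefficients are illustrative, not values used in the thesis.

```python
# Sketch: Hertz normal force from indentation and Kalker linear creep forces.
import numpy as np

def hertz_normal_force(delta, k_h=1.0e10, exponent=1.5):
    """Normal force from indentation delta [m]: F = k_h * delta**1.5 (point contact)."""
    return k_h * max(delta, 0.0) ** exponent

def kalker_linear_creep(xi_x, xi_y, a, b, G=8.0e10, c11=4.0, c22=3.7):
    """Linear-theory tangential forces from longitudinal/lateral creepages over
    an elliptical patch with semi-axes a, b [m]; c11, c22 depend on a/b and
    Poisson's ratio (assumed values here)."""
    Fx = -G * a * b * c11 * xi_x
    Fy = -G * a * b * c22 * xi_y
    return Fx, Fy

delta = 1.0e-4                       # 0.1 mm indentation (assumed)
N = hertz_normal_force(delta)
Fx, Fy = kalker_linear_creep(0.002, 0.001, a=6e-3, b=4e-3)
print(f"N = {N:.0f} N, Fx = {Fx:.0f} N, Fy = {Fy:.0f} N")
```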
Abstract:
In the post-genomic era, with the massive production of biological data, understanding the factors affecting protein stability is one of the most important and challenging tasks for highlighting the role of mutations in relation to human disease. The problem is at the basis of what is referred to as molecular medicine, with the underlying idea that pathologies can be detailed at the molecular level. To this purpose, scientific efforts focus on characterising mutations that hamper protein function and thereby affect the biological processes at the basis of cell physiology. New techniques have been developed with the aim of cataloguing single nucleotide polymorphisms (SNPs) at large across all the human chromosomes, and by this the information in dedicated databases is increasing exponentially. Mutations found at the DNA level, when occurring in transcribed regions, may lead to mutated proteins, and this can be a serious medical problem, largely affecting the phenotype. Bioinformatics tools are urgently needed to cope with the flood of genomic data stored in databases and to analyse the role of SNPs at the protein level. In principle, several experimental and theoretical observations suggest that protein stability in the solvent-protein space is responsible for correct protein functioning. Mutations found to be disease-related in DNA analyses are therefore often assumed to perturb protein stability as well. However, so far no extensive analysis at the proteome level has investigated whether this is the case. Computational methods have also been developed to infer whether a mutation is disease-related and, independently, whether it affects protein stability. Whether the perturbation of protein stability is related to what is routinely referred to as disease therefore remains a big question mark. In this work we have tried, for the first time, to explore the relation between mutations at the protein level and their relevance to disease with a large-scale computational study of the data from different databases. To this aim, in the first part of the thesis we derived, for each mutation type, two probabilistic indices (for 141 out of 150 possible SNPs): the perturbing index (Pp), which indicates the probability that a given mutation affects protein stability, considering all the in vitro thermodynamic data available, and the disease index (Pd), which indicates the probability of a mutation being disease-related, given all the mutations that have been clinically associated so far. We find, with robust statistics, that the two indices correlate, with the exception of the mutations that are related to somatic cancer. By this, each of the 150 mutation types can be coded by two values that allow a direct comparison with database information. Furthermore, we also implemented a computational method that, starting from the protein structure, predicts the effect of a mutation on protein stability, and found that it outperforms a set of other predictors performing the same task. The predictor is based on support vector machines and takes protein tertiary structures as input. We show that the predicted data correlate well with the data from the databases. All our efforts therefore add to the SNP annotation process and, more importantly, establish the relationship between protein stability perturbation and the human variome leading to the diseasome.
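A minimal sketch of the two indices, computed as simple empirical frequencies over toy data, could read as follows; the |ddG| threshold and the counts are illustrative assumptions, not the thesis's data or exact definitions.

```python
# Sketch: per-mutation-type perturbing index (Pp) and disease index (Pd)
# as empirical frequencies.

DDG_CUT = 1.0   # assumed threshold on |ddG| (kcal/mol) for calling a mutation perturbing

def perturbing_index(ddg_values):
    """Pp: fraction of in-vitro ddG measurements for this mutation type
    whose magnitude exceeds the perturbation threshold."""
    hits = sum(1 for d in ddg_values if abs(d) > DDG_CUT)
    return hits / len(ddg_values)

def disease_index(n_disease, n_total):
    """Pd: fraction of clinically annotated occurrences of this mutation type
    that are disease-related."""
    return n_disease / n_total

# toy data for one mutation type, e.g. Arg -> Trp (hypothetical numbers)
ddg = [-2.1, -0.4, 1.8, -3.0, 0.2]
print("Pp =", perturbing_index(ddg))
print("Pd =", disease_index(n_disease=34, n_total=50))
```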
Abstract:
The present thesis is divided into two main research areas: Classical Cosmology and (Loop) Quantum Gravity. The first part concerns cosmological models with one phantom field and one scalar field, which provide the 'super-accelerated' scenario not excluded by observations, thus exploring alternatives to the standard LambdaCDM scenario. The second part concerns the spinfoam approach to (Loop) Quantum Gravity, which is an attempt to provide a 'sum-over-histories' formulation of gravitational quantum transition amplitudes. The research presented here focuses on the face amplitude of a generic spinfoam model for Quantum Gravity.
Abstract:
The proper functioning of ion channels is a prerequisite for a normal cell, and disorders involving ion channels, or channelopathies, underlie many human diseases. Long QT syndromes (LQTS), for example, may arise from the malfunctioning of the hERG channel, caused either by the binding of drugs or by mutations in the HERG gene. In the first part of this thesis I present a framework to investigate the mechanism of ion conduction through the hERG channel. The free-energy profile governing the elementary steps of ion translocation in the pore was computed by means of umbrella sampling simulations. Compared to previous studies, we detected a different dynamic behavior: according to our data, hERG is more likely to mediate a conduction mechanism that has been referred to as "single-vacancy-like" by Roux and coworkers (2001), rather than a "knock-on" mechanism. The same protocol was applied to a model of hERG carrying the Gly628Ser mutation, found to cause congenital LQTS. The results provided interesting insights into the reasons for the malfunctioning of the mutant channel. Since they have critical functions in the viral life cycle, viral ion channels, such as the M2 proton channel, are considered attractive targets for antiviral therapy. A deep knowledge of the mechanisms that the virus employs to survive in the host cell is of primary importance for the identification of new antiviral strategies. In the second part of this thesis I shed light on the role that M2 plays in the control of the electrical potential inside the virus, charge equilibration being a condition required to allow proton influx. The ion conduction through M2 was simulated using the metadynamics technique. Based on our results, we suggest that an anion-mediated cation-proton exchange, as well as a direct anion-proton exchange, could both contribute to explaining the activity of the M2 channel.
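As an illustration of how a free-energy profile is extracted from umbrella-sampling windows, the following fragment runs a minimal WHAM iteration on synthetic data; the harmonic biases, spring constant and samples are assumptions, and production work would rely on a tested WHAM/MBAR implementation rather than this sketch.

```python
# Sketch: free-energy profile from umbrella-sampling windows via WHAM.
import numpy as np

kT = 0.596                                  # kcal/mol at ~300 K
centers = np.linspace(0.0, 10.0, 11)        # bias centers along the ion pathway (assumed)
k_spring = 2.0                              # kcal/mol/A^2, assumed

rng = np.random.default_rng(1)
samples = [c + rng.normal(0.0, np.sqrt(kT / k_spring), 2000) for c in centers]

edges = np.linspace(-1.0, 11.0, 121)
mids = 0.5 * (edges[:-1] + edges[1:])
hist = np.array([np.histogram(s, bins=edges)[0] for s in samples])  # n_i(x)
bias = 0.5 * k_spring * (mids[None, :] - centers[:, None]) ** 2      # U_i(x)
N = hist.sum(axis=1)                        # samples retained per window

f = np.zeros(len(centers))                  # per-window free-energy shifts F_i
for _ in range(500):                        # self-consistent WHAM iteration
    denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
    p = hist.sum(axis=0) / denom            # unbiased probability estimate
    f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
    if np.max(np.abs(f_new - f)) < 1e-7:
        break
    f = f_new

mask = p > 0
pmf = -kT * np.log(p[mask] / p[mask].max())  # profile, minimum shifted to zero
print(np.round(pmf[::10], 2))
```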
Abstract:
Coupled-cluster (CC) theory is one of the most successful methods in present-day quantum chemistry for the accurate description of molecules. The results presented in this work show that, beyond energies, a range of properties such as structural parameters, vibrational frequencies and rotation-vibration parameters of small and medium-sized molecules can be predicted reliably and precisely. In the first part of the work, the spin-adapted coupled-cluster ansatz (SA-CC) is presented as a new route to improving the description of open-shell systems. Here, the CC spin equations are solved in addition in order to determine the unknown wave-function parameters. This procedure guarantees that the resulting wave function is a spin eigenfunction. The implementation of the spin-adapted CC ansatz including single and double excitations (CCSD) for high-spin triplet systems is described in detail. In the second part, CC additivity schemes are presented that rest on the assumption that electron-correlation and basis-set effects are additive. The established practice of computing different contributions to the energy separately, with basis sets matched to the computational cost, and summing them up is here carried over to gradients and force constants. For a description of bond lengths and harmonic vibrational frequencies with experimental accuracy, inner-shell correlation effects as well as triple and quadruple excitations in the cluster operator of the wave function must be taken into account. Basis-set convergence is additionally accelerated with extrapolation methods. The quantitative prediction of the bond lengths of 17 small molecules, built from atoms of the first long period, is thus possible with an accuracy of a few hundredths of a picometer. For the vibrational frequencies of these molecules, the CC additivity scheme based on the computed force constants shows, compared with experimental results, a mean absolute error of 3.5 cm-1 and a standard deviation of 2.2 cm-1. In addition, computed spectroscopic data for some larger molecules are presented in support of experimental investigations. The studies presented in this work on the isomerization of the dihalogen sulfanes XSSX (X = F, Cl), and the calculation of structural and rotation-vibration parameters for the molecules CHCl2F and CHClF2, show that even the perturbative CCSD(T) approximation provides qualitatively good predictions of experimental results. Furthermore, discrepancies between experimental and computed bond distances for the molecules boron hydride and carbenylium are removed by taking into account the electronic contribution to the moment of inertia.
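The additivity idea, together with the common X^-3 two-point basis-set extrapolation of the correlation energy, can be sketched as follows; all numerical values are placeholders, not results from this work.

```python
# Sketch: composite (additivity) estimate plus two-point CBS extrapolation.

def extrapolate_x3(e_small, e_large, x_small, x_large):
    """Two-point CBS extrapolation assuming E_X = E_CBS + A * X**-3."""
    num = x_large**3 * e_large - x_small**3 * e_small
    return num / (x_large**3 - x_small**3)

# placeholder correlation energies (hartree) in cc-pVTZ (X=3) / cc-pVQZ (X=4)
e_corr_cbs = extrapolate_x3(-0.3401, -0.3510, 3, 4)

# composite property: each correction computed with a basis matched to its cost
prop = (
    -76.0600          # base value, e.g. frozen-core CCSD(T), large basis (placeholder)
    + (-0.0012)       # core-correlation correction, core-valence basis
    + (-0.0003)       # full triples beyond (T), smaller basis
    + (-0.0001)       # quadruple excitations, even smaller basis
)
print(f"CBS correlation estimate: {e_corr_cbs:.4f} Eh, composite value: {prop:.4f}")
```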
Abstract:
The Spin-Statistics theorem states that the statistics of a system of identical particles is determined by their spin: particles of integer spin are bosons (i.e. they obey Bose-Einstein statistics), whereas particles of half-integer spin are fermions (i.e. they obey Fermi-Dirac statistics). Since the original proof by Fierz and Pauli, it has been known that the connection between spin and statistics follows from the general principles of relativistic Quantum Field Theory. In spite of this, there are different approaches to Spin-Statistics and it is not clear whether the theorem holds under assumptions that are different from, and even less restrictive than, the usual ones (e.g. Lorentz covariance). Additionally, in Quantum Mechanics there is a deep relation between indistinguishability and the geometry of the configuration space, as clearly illustrated by Gibbs' paradox. Therefore, for many years efforts have been made to find a geometric proof of the connection between spin and statistics. Recently, various proposals have been put forward that attempt to derive the Spin-Statistics connection from assumptions different from the ones used in the relativistic, quantum-field-theoretic proofs. Among these is the one due to Berry and Robbins (BR), based on the postulation of a certain single-valuedness condition, which has caused renewed interest in the problem. In the present thesis, we consider the problem of indistinguishability in Quantum Mechanics from a geometric-algebraic point of view. An approach is developed to study configuration spaces Q having a finite fundamental group, which allows us to describe different geometric structures of Q in terms of spaces of functions on the universal cover of Q. In particular, it is shown that the space of complex continuous functions over the universal cover of Q admits a decomposition into C(Q)-submodules, labelled by the irreducible representations of the fundamental group of Q, which can be interpreted as the spaces of sections of certain flat vector bundles over Q. With this technique, various results pertaining to the problem of quantum indistinguishability are reproduced in a clear and systematic way. Our method is also used to give a global formulation of the BR construction. As a result of this analysis, it is found that the single-valuedness condition of BR is inconsistent. Additionally, a proposal is made, within our approach, aiming at establishing the Fermi-Bose alternative.
Abstract:
In this thesis a numerical study of the flow around a simplified Ahmed body has been carried out using the Large Eddy Simulation Coherent Structure Model (LES-CSM). The models used are two salient geometries from the experimental investigation performed in [1] and consist, in particular, of two notch-back body geometries. Six simulations are carried out in total, varying the Reynolds number and the back-light angle of the model's rear part. The Reynolds numbers used, based on the height of the models and the free-stream velocity, are Re = 10000, Re = 30000 and Re = 50000. The back-light angles of the slanted surface with respect to the horizontal roof surface, which characterize the vehicle, are taken as B = 31.8° and B = 42°, respectively. The experimental results in [1] have shown that, depending on the parameter B, asymmetric or symmetric averaged flow over the back-light and in the wake can be observed even for a symmetric geometry. The present master's thesis has principally two aims. The first is to investigate and confirm the influence of the parameter B on the presence of the asymmetry of the averaged flow, and to confirm the features described in the experimental results. The second is to investigate and observe the influence of the second variable, the Reynolds number, on the development of the asymmetric flow itself. The results have shown the presence of the mentioned asymmetry as well as an influence of the Reynolds number on it.
Abstract:
In this study new tomographic models of Colombia were computed, using the seismicity recorded by the Colombian seismic network during the period 2006-2009. In this time period the improvement of the seismic network yields more stable hypocentral results than older data sets and allows new 3D Vp and Vp/Vs models to be computed. The final dataset consists of 10813 P- and 8614 S-arrival times associated with 1405 earthquakes. Tests with synthetic data and resolution analysis indicate that the velocity models are well constrained in central, western and southwestern Colombia down to a depth of 160 km; the resolution is poor in northern Colombia and close to Venezuela owing to a lack of seismic stations and seismicity. The tomographic models and the relocated seismicity indicate the existence of E-SE-subducting Nazca lithosphere beneath central and southern Colombia. The north-south changes in the Wadati-Benioff zone, in the Vp and Vp/Vs patterns and in the volcanism show that the downgoing plate is segmented by E-W-directed slab tears, suggesting the presence of three sectors. Earthquakes in the northernmost sector account for most of the Colombian seismicity and are concentrated in the 100-170 km depth interval beneath the Eastern Cordillera. Here a massive dehydration is inferred, resulting from a delay in the eclogitization of a thickened oceanic crust in a flat-subduction geometry. In this sector a cluster of intermediate-depth seismicity (the Bucaramanga Nest) is present beneath the elbow of the Eastern Cordillera, interpreted as the result of a massive and highly localized dehydration phenomenon caused by a hyper-hydrous oceanic crust. The central and southern sectors, although different in Vp pattern, conversely show a continuous, steep and more homogeneous Wadati-Benioff zone with overlying volcanic areas. Here an oceanic crust of normal thickness is inferred, allowing gradual and continuous metamorphic reactions to take place with depth and enabling fluid migration towards the mantle wedge.
Abstract:
This doctoral thesis studies blood flow with a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (concentric or offset with respect to the axis of symmetry) or stenoses of various shapes and extents. The arteries are rigid, elastic or hyperelastic cylindrical solids, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is in the steady and transient laminar regime, and the blood is a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains, in the latter case analyzing the fluid-structure interaction. In the three-dimensional cases the arteries (fluid-dynamic simulations) are infinitely rigid: once the pressure field has been obtained, a structural analysis follows, to determine the changes in cross-section and the persistence of the disturbance on the flow. The blood flow rate is determined in the three-dimensional cases with a catheter by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries the pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle in order to capture the recirculations. Two appendices are attached to the thesis, which use CFD codes to study heat transfer in microchannels and the evaporation of water droplets in unconfined systems. The fluid dynamics in microchannels is analogous to hemodynamics in capillaries, and the Eulerian-Lagrangian method (evaporation simulations) schematizes the mixed nature of blood. The part concerning microchannels analyzes the transient following the application of a time-varying heat flux, varying the inlet velocity and the dimensions of the microchannel. The investigation of droplet evaporation is a parametric 3D analysis, which examines the weight of each single parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) in order to identify the one that most influences the phenomenon.
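As an illustration of the Casson constitutive law used for blood above, the following fragment evaluates the shear-rate-dependent apparent viscosity; the yield stress and plastic viscosity are typical literature values for blood, not the modified Gonzales & Moraga coefficients.

```python
# Sketch: Casson law sqrt(tau) = sqrt(tau_y) + sqrt(mu_p * gamma_dot),
# giving an apparent viscosity mu = tau / gamma_dot that falls with shear rate.
import numpy as np

TAU_Y = 0.005    # yield stress [Pa] (assumed)
MU_P = 0.0035    # plastic viscosity [Pa s] (assumed)

def casson_apparent_viscosity(gamma_dot: np.ndarray) -> np.ndarray:
    """Apparent viscosity mu = tau/gamma_dot from the Casson law."""
    g = np.maximum(gamma_dot, 1e-6)          # regularize the low-shear limit
    return (np.sqrt(TAU_Y / g) + np.sqrt(MU_P)) ** 2

shear_rates = np.logspace(-1, 3, 5)          # 0.1 to 1000 1/s
for g, mu in zip(shear_rates, casson_apparent_viscosity(shear_rates)):
    print(f"gamma_dot = {g:8.1f} 1/s  ->  mu = {mu * 1e3:6.2f} mPa s")
```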