17 results for structure, analysis, modeling
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The aim of this thesis, carried out within the THESEUS project, is the development of a 2DV two-phase mathematical model, based on the existing IH-2VOF code developed by the University of Cantabria, able to represent jointly the overtopping phenomenon and sediment transport. Several numerical simulations were carried out in order to analyze the flow characteristics over a dike crest. The results show that the seaward/landward slope does not affect the evolution of the flow depth and velocity over the dike crest, whereas the most important parameter is the relative submergence. Wave heights decrease and flow velocities increase as waves travel over the crest. In particular, with increasing submergence, the wave height decay and the velocity increase become less marked. Furthermore, an appropriate curve able to fit the variation of the wave height and velocity over the dike crest was found. For both the wave height and the wave velocity, different fitting coefficients were determined on the basis of the submergence and of the significant wave height. An equation describing the trend of the dimensionless coefficient c_h for the wave height was derived. These conclusions could be taken into consideration for the design criteria and the upgrade of the structures. In the second part of the thesis, new equations for the representation of sediment transport were introduced into the IH-2VOF model in order to represent beach erosion while waves run up and overtop sea banks during storms. The new model allows the calculation of sediment fluxes in the water column together with the sediment concentration, and makes it possible to model the bed profile evolution. Different tests were performed under low-intensity regular waves with a homogeneous layer of sand on the bottom of a channel in order to analyze the erosion-deposition patterns and verify the model results.
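The fitting of a dimensionless decay coefficient for the wave height along the crest can be illustrated with a small sketch. This is not the thesis's actual fitting procedure, only a minimal example assuming an exponential decay form H(x)/H0 = exp(-c_h · x/B); the function name and the synthetic data are invented for illustration.

```python
import numpy as np

def fit_decay_coefficient(x_over_b, h_ratio):
    """Least-squares fit of c_h in the assumed form H(x)/H0 = exp(-c_h * x/B).

    Linearised: ln(H/H0) = -c_h * (x/B), a slope-through-the-origin fit."""
    x = np.asarray(x_over_b, dtype=float)
    y = np.log(np.asarray(h_ratio, dtype=float))
    return -np.sum(x * y) / np.sum(x * x)

# synthetic check: wave-height ratios generated with c_h = 0.5
x = np.linspace(0.1, 1.0, 10)
h = np.exp(-0.5 * x)
print(round(fit_decay_coefficient(x, h), 3))  # -> 0.5
```

In the thesis the coefficients additionally depend on the submergence and the significant wave height; a sketch like this would be fitted separately for each condition.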
Abstract:
This thesis is divided into three chapters. In the first chapter we analyse the results of the world forecasting experiment run by the Collaboratory for the Study of Earthquake Predictability (CSEP). We take the opportunity of this experiment to contribute to the definition of a more robust and reliable statistical procedure to evaluate earthquake forecasting models. We first present the models and the target earthquakes to be forecast. Then we explain the consistency and comparison tests that are used in CSEP experiments to evaluate the performance of the models. Introducing a methodology to create ensemble forecasting models, we show that models, when properly combined, almost always perform better than any single model. In the second chapter we discuss in depth one of the basic features of PSHA: the declustering of the seismicity rates. We first introduce the Cornell-McGuire method for PSHA and we present the different motivations behind the need to decluster seismic catalogs. Using a theorem of modern probability theory (Le Cam's theorem), we show that declustering is not necessary to obtain a Poissonian behaviour of the exceedances, which is usually considered fundamental to transform exceedance rates into exceedance probabilities in the PSHA framework. We present a method to correct PSHA for declustering, building a more realistic PSHA. In the last chapter we explore the methods that are commonly used to take into account the epistemic uncertainty in PSHA. The most widely used method is the logic tree, which stands at the basis of the most advanced seismic hazard maps. We illustrate the probabilistic structure of the logic tree, and then we show that this structure is not adequate to describe the epistemic uncertainty. We then propose a new probabilistic framework based on ensemble modelling that properly accounts for epistemic uncertainties in PSHA.
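The role of the Poissonian behaviour of exceedances mentioned above can be made concrete: under the Poisson assumption, an exceedance rate converts to an exceedance probability with the standard textbook relation below (this is general PSHA background, not code from the thesis).

```python
import math

def exceedance_probability(rate_per_year, t_years):
    """Poisson assumption: P(at least one exceedance in t years)
    given a mean annual exceedance rate."""
    return 1.0 - math.exp(-rate_per_year * t_years)

# a 475-year return period gives ~10% probability of exceedance in 50 years
print(round(exceedance_probability(1.0 / 475.0, 50.0), 3))  # -> 0.1
```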
Abstract:
Ion channels are protein molecules embedded in the lipid bilayer of cell membranes. They act as powerful sensing elements, switching chemical-physical stimuli into ion fluxes. At a glance, ion channels are water-filled pores which can open and close in response to different stimuli (gating) and, once open, select the permeating ion species (selectivity). They play a crucial role in several physiological functions, like nerve transmission, muscular contraction, and secretion. Besides, ion channels can be used in technological applications for different purposes (sensing of organic molecules, DNA sequencing). As a result, there is remarkable interest in understanding the molecular determinants of channel functioning. Nowadays, both the functional and the structural characteristics of ion channels can be experimentally solved. The purpose of this thesis was to investigate the structure-function relation in ion channels by computational techniques. Most of the analyses focused on the mechanisms of ion conduction and the numerical methodologies to compute the channel conductance. The standard techniques for atomistic simulation of complex molecular systems (Molecular Dynamics) cannot be routinely used to calculate ion fluxes in membrane channels, because of the high computational resources needed. The main step forward of the PhD research activity was the development of a computational algorithm for the calculation of ion fluxes in protein channels. The algorithm, based on the electrodiffusion theory, is computationally inexpensive, and was used for an extensive analysis of the molecular determinants of the channel conductance. The first record of ion fluxes through a single protein channel dates back to 1976, and since then measuring the single channel conductance has become a standard experimental procedure. Chapter 1 introduces ion channels, and the experimental techniques used to measure the channel currents.
The abundance of functional data (channel currents) is not matched by an equal abundance of structural data. The bacterial potassium channel KcsA was the first selective ion channel to be experimentally solved (1998), and after KcsA the structures of four different potassium channels were revealed. These experimental data inspired a new era in ion channel modeling. Once the atomic structures of channels are known, it is possible to define mathematical models based on physical descriptions of the molecular systems. These physically based models can provide an atomic description of ion channel functioning, and predict the effect of structural changes. Chapter 2 introduces the computational methods used throughout the thesis to model ion channel functioning at the atomic level. In Chapter 3 and Chapter 4 the ion conduction through potassium channels is analyzed by an approach based on the Poisson-Nernst-Planck electrodiffusion theory. In the electrodiffusion theory ion conduction is modeled by the drift-diffusion equations, thus describing the ion distributions by continuum functions. The numerical solver of the Poisson-Nernst-Planck equations was tested on the KcsA potassium channel (Chapter 3), and then used to analyze how the atomic structure of the intracellular vestibule of potassium channels affects the conductance (Chapter 4). As a major result, a correlation between the channel conductance and the potassium concentration in the intracellular vestibule emerged. The atomic structure of the channel modulates the potassium concentration in the vestibule, and thus its conductance. This mechanism explains the phenotype of the BK potassium channels, a sub-family of potassium channels with high single channel conductance.
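As a toy illustration of the electrodiffusion picture, the steady-state 1D Nernst-Planck equation under a constant electric field admits a closed-form (Goldman-Hodgkin-Katz-type) flux. The sketch below is only a minimal stand-in for the thesis's full Poisson-Nernst-Planck solver; the function name and parameter values are invented.

```python
import math

def np_flux_constant_field(D, L, z, V, c_in, c_out, V_T=25.7e-3):
    """Steady-state 1D Nernst-Planck flux across a slab of thickness L
    under a constant field (GHK-type closed-form solution).

    D: diffusion coefficient, z: ion valence, V: transmembrane voltage,
    c_in/c_out: boundary concentrations, V_T: thermal voltage kT/e (V)."""
    u = z * V / V_T  # dimensionless voltage
    if abs(u) < 1e-12:
        return D * (c_in - c_out) / L  # pure diffusion (Fick) limit
    return (D / L) * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

# at zero voltage the flux reduces to simple Fickian diffusion D*(c_in-c_out)/L
print(round(np_flux_constant_field(1e-9, 1e-8, 1, 0.0, 100.0, 10.0), 6))  # -> 9.0
```

The continuum description is what makes such calculations computationally inexpensive compared with all-atom Molecular Dynamics.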
The functional role of the intracellular vestibule is also the subject of Chapter 5, where the affinities of the potassium channels hEag1 (involved in tumour-cell proliferation) and hErg (important in the cardiac cycle) for several pharmaceutical drugs were compared. Both experimental measurements and molecular modeling were used in order to identify differences in the blocking mechanism of the two channels, which could be exploited in the synthesis of selective blockers. The experimental data pointed out the different role of residue mutations in the blockage of hEag1 and hErg, and the molecular modeling provided a possible explanation based on different binding sites in the intracellular vestibule. Modeling ion channels at the molecular level relates the functioning of a channel to its atomic structure (Chapters 3-5), and can also be useful to predict the structure of ion channels (Chapters 6-7). In Chapter 6 the structure of the KcsA potassium channel depleted of potassium ions is analyzed by molecular dynamics simulations. Recently, a surprisingly high osmotic permeability of the KcsA channel was experimentally measured. All the available crystallographic structures of KcsA refer to a channel occupied by potassium ions. To conduct water molecules, potassium ions must be expelled from KcsA. The structure of the potassium-depleted KcsA channel and the mechanism of water permeation are still unknown, and have been investigated by numerical simulations. Molecular dynamics of KcsA identified a possible atomic structure of the potassium-depleted KcsA channel, and a mechanism for water permeation. Depletion of potassium ions is an extreme situation for potassium channels, unlikely in physiological conditions. However, the simulation of such an extreme condition could help to identify the structural conformations, and thus the functional states, accessible to potassium ion channels.
The last chapter of the thesis deals with the atomic structure of the α-Hemolysin channel. α-Hemolysin is the major determinant of Staphylococcus aureus toxicity, and is also the prototype channel for possible use in technological applications. The atomic structure of α-Hemolysin was revealed by X-ray crystallography, but several pieces of experimental evidence suggest the presence of an alternative atomic structure. This alternative structure was predicted by combining experimental measurements of single channel currents and numerical simulations. This thesis is organized in two parts: the first part provides an overview of ion channels and of the numerical methods adopted throughout the thesis, while the second part describes the research projects tackled in the course of the PhD programme. The aim of the research activity was to relate the functional characteristics of ion channels to their atomic structure. In presenting the different research projects, the role of numerical simulations in analyzing the structure-function relation in ion channels is highlighted.
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it has become urgent to develop new methods able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful for protein fold recognition and de novo design. The prediction of these contacts requires the study of the protein inter-residue distances related to the specific type of amino acid pair, which are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps.
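The two small-world statistics named above can be computed directly from a contact map treated as an adjacency structure. The sketch below uses a toy five-residue graph, not real protein data, and hypothetical function names.

```python
from collections import deque

def path_lengths(adj, s):
    """BFS shortest-path lengths from node s in an unweighted graph."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def characteristic_path_length(adj):
    """Average shortest-path length over all ordered node pairs."""
    n = len(adj)
    total = sum(d for s in adj for d in path_lengths(adj, s).values())
    return total / (n * (n - 1))

def clustering_coefficient(adj):
    """Mean local clustering: fraction of a node's neighbours
    that are themselves in contact."""
    cc = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cc.append(0.0)
            continue
        links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
        cc.append(2.0 * links / (k * (k - 1)))
    return sum(cc) / len(cc)

# toy "contact map" as an adjacency list over residues 0..4
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
print(characteristic_path_length(adj))        # -> 1.7
print(round(clustering_coefficient(adj), 3))  # -> 0.467
```

Small-world behavior shows up as a short characteristic path length combined with a clustering coefficient much higher than that of a random graph with the same degree.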
Provided that residue contacts are known for a protein sequence, the major features of its 3D structure can be deduced by combining this knowledge with correctly predicted secondary structure motifs. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of available protein sequences and structures poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function. Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized.
A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering the information available in the present databases of molecular functions and structures.
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches – Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) or Heckman's Selection Model (Heckman, 1979) – being widely accepted and used as the best fixes. These solutions to the bias that arises in particular from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance in order to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al., 2005), and by preserving the multivariate nature of the data.
Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead for using the existing variability within the data and letting the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the consideration that in applied research all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the literature on evaluation methods. The attention is focused on Rubin's Potential Outcome Approach, matching methods, and briefly on Heckman's Selection Model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to correctly test balance. The third part contains the proposed original contribution, a simulation study that allows checking the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline our future perspectives.
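The inertia of a contingency table between covariate categories and the treatment indicator, used in the abstract as the dependence-based bias measure, is the Pearson chi-square statistic divided by the sample size. A minimal sketch, with toy tables and a hypothetical function name:

```python
def inertia(table):
    """Total inertia (Pearson chi-square / n) of a contingency table,
    a standard measure of dependence between row and column categories."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return chi2 / n

# rows: covariate categories of X; columns: treatment indicator T
print(inertia([[10, 10], [10, 10]]))  # -> 0.0 (X and T independent, no bias)
print(inertia([[20, 0], [0, 20]]))    # -> 1.0 (perfect 2x2 association)
```

Zero inertia corresponds to covariate balance between treated and control units; the thesis's imbalance test asks whether the observed inertia exceeds what its asymptotic distribution allows by chance.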
Abstract:
The research for this PhD project consisted of applying the receiver function (RF) analysis technique to different data sets of teleseismic events recorded at temporary and permanent stations located in three distinct study regions: the Colli Albani area, the Northern Apennines and the Southern Apennines. We found velocity models to interpret the structures in these regions, which possess very different geologic and tectonic characteristics and therefore offer interesting case studies. In the Colli Albani some of the features evidenced in the RFs are shared by all the analyzed stations: the Moho is almost flat and is located at about 23 km depth, and the presence of a relatively shallow limestone layer is a stable feature; conversely, other features vary from station to station, indicating local complexities. Three seismic stations, close to the central part of the former volcanic edifice, display relevant anisotropic signatures with symmetry axes consistent with the emplacement of the magmatic chamber. Two further anisotropic layers are present at greater depth, in the lower crust and the upper mantle, respectively, with symmetry axis directions related to the evolution of the volcanic complex. In the Northern Apennines we defined the isotropic structure of the area, finding the depths of the Tyrrhenian (almost 25 km and flat) and Adriatic (40 km and dipping underneath the Apennine crests) Mohos. We determined a zone in which the two Mohos overlap, and identified an anisotropic body in between, involved in the subduction and going down with the Adriatic Moho. We interpreted the downgoing anisotropic layer as generated by post-subduction delamination of the top-slab layer, probably made of metamorphosed crustal rocks caught in the subduction channel and buoyantly rising toward the surface.
In the Southern Apennines, we found the Moho depth for 16 seismic stations, and highlighted the presence of an anisotropic layer underneath each station, at about 15-20 km depth below the whole study area. The Moho displays a dome-like geometry: it is shallow (29 km) in the central part of the study area, whereas it deepens peripherally (down to 45 km). The symmetry axes of the anisotropic layer, interpreted as a layer separating the upper and the lower crust, show a Moho-related pattern, indicated by the foliation of the layer, which is parallel to the Moho trend. Moreover, owing to the exceptional seismic event that occurred on April 6th next to the town of L'Aquila, we determined the Vs model for two stations located next to the epicenter. An extremely high velocity body is found underneath the AQU station at 4-10 km depth, reaching Vs of about 4 km/s, while this body is lacking underneath the FAGN station. We compared the presence of this body with other recent works and found an anti-correlation between the high-Vs body, the maximum slip patches and the earthquake distribution. The nature of this body is speculative, since such high velocities are consistent with deep crust or upper mantle, but it can be interpreted as a high-strength barrier, of which the high Vs is a typical signature.
Abstract:
This thesis introduces new processing techniques for computer-aided interpretation of ultrasound images with the purpose of supporting medical diagnosis. In terms of practical application, the goal of this work is the improvement of current prostate biopsy protocols by providing physicians with a visual map, overlaid on ultrasound images, marking regions potentially affected by disease. As far as analysis techniques are concerned, the main contribution of this work to the state of the art is the introduction of deconvolution as a pre-processing step in the standard ultrasonic tissue characterization procedure, to improve the diagnostic significance of ultrasonic features. This thesis also includes some innovations in ultrasound modeling, in particular the employment of a continuous-time autoregressive moving-average (CARMA) model for ultrasound signals, a new maximum-likelihood CARMA estimator based on exponential splines, and the definition of CARMA parameters as new ultrasonic features able to capture scatterer concentration. Finally, concerning the clinical usefulness of the developed techniques, the main contribution of this research is showing, through a study based on medical ground truth, that a reduction in the number of sampled cores in standard prostate biopsy is possible while preserving the same diagnostic power as the current clinical protocol.
Abstract:
The Southern Tyrrhenian subduction system shows a complex interaction among asthenospheric flow, the subducting slab and the overriding plate. To shed light on the deformations and mechanical properties of the slab and surrounding mantle, I investigated seismic anisotropy and attenuation properties throughout the subduction region. I used both teleseisms and slab earthquakes, analyzing shear-wave splitting on SKS and S phases, respectively. The fast polarization directions φ and the delay times δt were retrieved using the method of Silver and Chan [1991]. SKS and S φ reveal a complex anisotropy pattern across the subduction zone. SKS rays sample primarily the sub-slab region, showing rotation of fast directions following the curved shape of the slab and very strong anisotropy. S rays sample mainly the slab, showing variable φ and a smaller δt. SKS and S splitting reveals a well developed toroidal flow at the SW edge of the slab, while at its NE edge the pattern is not very clear. This suggests that the anisotropy is controlled by the slab rollback, responsible for about 100 km of slab-parallel φ in the sub-slab mantle. The slab is weakly anisotropic, suggesting the asthenosphere as the main source of anisotropy. To investigate the physical properties of the slab and surrounding regions, I analyzed the seismic P and S wave attenuation. By inverting high-quality S-wave t* from slab earthquakes, 3D attenuation models down to 300 km were obtained. The attenuation results image the slab as a low-attenuation body, but with a heterogeneous QS and QP structure showing spots of high attenuation between 100-200 km depth, which could be due to dehydration associated with slab metamorphism. A low-QS anomaly is present in the mantle wedge beneath the Aeolian volcanic arc and could indicate mantle melting and slab dehydration.
Abstract:
The term "Brain Imaging" identifies a set of techniques to analyze the structure and/or functional behavior of the brain in normal and/or pathological situations. These techniques are largely used in the study of brain activity. In addition to clinical usage, analysis of brain activity is gaining popularity in other recent fields, e.g. Brain Computer Interfaces (BCI) and the study of cognitive processes. In this context, usage of classical solutions (e.g. fMRI, PET-CT) can be unfeasible, due to their low temporal resolution, high cost and limited portability. For these reasons, alternative low-cost techniques are an object of research, typically based on simple recording hardware and on an intensive data elaboration process. Typical examples are ElectroEncephaloGraphy (EEG) and Electrical Impedance Tomography (EIT), where the electric potential at the patient's scalp is recorded by high-impedance electrodes. In EEG the potentials are directly generated by neuronal activity, while in EIT they result from the injection of small currents at the scalp. To retrieve meaningful insights on brain activity from measurements, EIT and EEG rely on detailed knowledge of the underlying electrical properties of the body. This is obtained from numerical models of the electric field distribution therein. The inhomogeneous and anisotropic electric properties of human tissues make accurate modeling and simulation very challenging, leading to a trade-off between physical accuracy and technical feasibility, which currently severely limits the capabilities of these techniques. Moreover, the elaboration of the recorded data requires computationally intensive regularization techniques, which affects applications with strict temporal constraints (such as BCI). This work focuses on the parallel implementation of a work-flow for EEG and EIT data processing.
The resulting software is accelerated using many-core GPUs, in order to provide solutions in reasonable times and address the requirements of real-time BCI systems, without over-simplifying the complexity and accuracy of the head models.
Abstract:
This thesis tackles the problem of the automated detection of the atmospheric boundary layer (BL) height, h, from aerosol lidar/ceilometer observations. A new method, the Bayesian Selective Method (BSM), is presented. It implements a Bayesian statistical inference procedure which combines different sources of information in a statistically optimal way. First, atmospheric stratification boundaries are located from discontinuities in the ceilometer back-scattered signal. The BSM then identifies the discontinuity edge that has the highest probability of effectively marking the BL height. Information from contemporaneous physical boundary layer model simulations and a climatological dataset of BL height evolution are combined in the assimilation framework to assist this choice. The BSM algorithm has been tested on four months of continuous ceilometer measurements collected during the BASE:ALFA project and is shown to realistically diagnose the BL depth evolution in many different weather conditions. The BASE:ALFA dataset is then used to investigate the boundary layer structure in stable conditions. Functions from the Obukhov similarity theory are used as regression curves to fit observed velocity and temperature profiles in the lower half of the stable boundary layer. Surface fluxes of heat and momentum are the best-fitting parameters in this exercise and are compared with those measured by a sonic anemometer. The comparison shows remarkable discrepancies, more evident in cases for which the bulk Richardson number is quite large. This analysis supports earlier results that surface turbulent fluxes are not the appropriate scaling parameters for profiles of mean quantities in very stable conditions. One of the practical consequences is that boundary layer height diagnostic formulations which mainly rely on surface fluxes disagree with what is obtained by inspecting co-located radiosounding profiles.
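The edge-selection step of a BSM-like procedure can be caricatured as scoring each candidate discontinuity by a likelihood-times-prior product. The sketch below is only illustrative of the Bayesian combination idea, not the thesis's algorithm: a Gaussian prior stands in for the model/climatology information, and all names and numbers are invented.

```python
import math

def select_bl_height(candidates, gradient_strength, prior_mean, prior_sigma):
    """Pick the candidate edge with the highest posterior score:
    posterior ∝ (signal-edge strength) × Gaussian prior centred on the
    model/climatological BL height."""
    def prior(h):
        z = (h - prior_mean) / prior_sigma
        return math.exp(-0.5 * z * z)
    scores = [g * prior(h) for h, g in zip(candidates, gradient_strength)]
    return candidates[scores.index(max(scores))]

# three candidate edges (m); the strongest edge, far from the prior, loses
print(select_bl_height([300, 800, 1500], [1.0, 0.9, 0.2], 900, 300))  # -> 800
```

This is the sense in which the prior information "assists the choice": a slightly weaker backscatter discontinuity near the expected BL height can outrank a stronger one at an implausible altitude.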
Abstract:
From the perspective of a new generation of opto-electronic technologies based on organic semiconductors, a major objective is to achieve a deep and detailed knowledge of structure-property relationships, in order to optimize the electronic, optical, and charge transport properties by tuning the chemical-physical characteristics of the compounds. The purpose of this dissertation is to contribute to such understanding through suitable theoretical and computational studies. Specifically, the structural, electronic, optical, and charge transport characteristics of several promising, recently synthesized organic materials are investigated by means of an integrated approach encompassing quantum-chemical calculations, molecular dynamics and kinetic Monte Carlo simulations. Particular care is devoted to the rationalization of optical and charge transport properties in terms of both intra- and intermolecular features. Moreover, a considerable part of this project involves the development of an in-house set of procedures and software components required to assist the modeling of charge transport properties in the framework of the non-adiabatic hopping mechanism applied to organic crystalline materials. In the first part of my investigations, I mainly discuss the optical, electronic, and structural properties of several core-extended rylene derivatives, which can be regarded as model compounds for graphene nanoribbons. Two families have been studied, consisting of bay-linked perylene bisimide oligomers and N-annulated rylenes. Besides rylene derivatives, my studies also concerned the electronic and spectroscopic properties of tetracene diimides, quinoidal oligothiophenes, and oxygen-doped picene. As an example of device application, I studied the structural characteristics governing the efficiency of resistive molecular memories based on a derivative of benzoquinone.
Finally, in the second part of my investigations, I concentrate on the charge transport properties of perylene bisimide derivatives. Specifically, a comprehensive study of the structural and thermal effects on the charge transport of several core-twisted chlorinated and fluoro-alkylated perylene bisimide n-type semiconductors is presented.
Abstract:
Extrusion is a process used to form long products of constant cross section, in a wide variety of shapes, from simple billets. Aluminum alloys are the materials most processed in the extrusion industry because of their deformability and their wide field of applications, ranging from building to aerospace and from design to the automotive industry. These diverse applications imply different requirements, from critical structural performance to high-quality surfaces and aesthetic appearance, which can be met by the wide range of available alloys and treatments. Whichever requirement is critical, both depend directly on the microstructure. The extrusion process is moreover characterized by large deformations and complex strain gradients, which make control of the microstructure evolution difficult and, at present, not yet fully achieved. Nevertheless, Finite Element modeling has reached a maturity that allows it to be used as a tool for investigating and predicting microstructure evolution. This thesis analyzes and models the evolution of microstructure throughout the entire extrusion process for 6XXX series aluminum alloys. The core of the work was the development of specific tests to investigate microstructure evolution and to validate the model implemented in a commercial FE code. Alongside this, two essential activities were carried out to calibrate the model correctly, beyond a simple search for fitting parameters, thus leading to the understanding and control of both the code and the process. In this direction, activities were also conducted to build critical know-how in the interpretation of microstructure and extrusion phenomena. It is believed, in fact, that analyzing microstructure evolution regardless of its relevance to the technological aspects of the process would be of little use for industry and ineffective for the interpretation of the results.
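Microstructure models of the kind described are typically built from a few empirical relations: the Zener-Hollomon parameter that compensates strain rate for temperature, an inverse-log law for subgrain size, and JMAK (Avrami) kinetics for recrystallization. The snippet below is a generic textbook sketch, not the model of the thesis; the activation energy `Q` and the coefficients `a`, `b`, `k`, `n` are illustrative placeholders.

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate, temp_c, Q=156000.0):
    """Temperature-compensated strain rate Z = eps_dot * exp(Q / (R*T)).
    Q = 156 kJ/mol is an illustrative activation energy for hot deformation of Al."""
    T = temp_c + 273.15
    return strain_rate * math.exp(Q / (R_GAS * T))

def subgrain_size_um(Z, a=-0.5, b=0.07):
    """Empirical relation 1/delta = a + b*ln(Z) (coefficients illustrative):
    higher Z (colder or faster deformation) yields finer subgrains."""
    return 1.0 / (a + b * math.log(Z))

def jmak_recrystallized(t_s, k=0.05, n=2.0):
    """JMAK (Avrami) recrystallized volume fraction X = 1 - exp(-k*t^n)."""
    return 1.0 - math.exp(-k * t_s ** n)
```

In an FE workflow, relations like these are evaluated at each integration point from the locally computed strain rate and temperature, which is what makes calibration against dedicated experiments essential.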
Abstract:
Since the late 1980s, the automation of sequencing techniques and the spread of computers have given rise to a flourishing number of new molecular structures and sequences and to a proliferation of new databases in which to store them. Three computational approaches are presented here that can analyse the massive amount of publicly available data in order to answer important biological questions. The first strategy studies the incorrect assignment of the first AUG codon in a messenger RNA (mRNA), due to the incomplete determination of its 5' end sequence. An extension of the mRNA 5' coding region was identified in 477 human loci, out of all known human mRNAs analysed, using an automated expressed sequence tag (EST)-based approach. Proof-of-concept confirmation was obtained by in vitro cloning and sequencing for GNB2L1, QARS, and TDP2, and the consequences for functional studies are discussed. The second approach analyses codon bias, the phenomenon in which distinct synonymous codons are used with different frequencies, and, after integration with a gene expression profile, estimates the total number of codons present across all expressed mRNAs (named here the "codonome value") in a given biological condition. Systematic analyses across different pathological and normal human tissues and multiple species show a surprisingly tight correlation between the codon bias and the codonome bias. The third approach is used to study the expression of genes implicated in human autism spectrum disorder (ASD). ASD-implicated genes sharing microRNA response elements (MREs) for the same microRNA are co-expressed in brain samples from healthy and ASD-affected individuals. The differential expression of a recently identified long non-coding RNA, which has four MREs for the same microRNA, could disrupt the equilibrium of this network, but further analyses and experiments are needed.
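The expression-weighted codon counting behind the "codonome value" can be sketched in a few lines. This is a minimal illustration assuming a simple expression weight per coding sequence (e.g. a TPM-like value), not the thesis's actual pipeline.

```python
from collections import Counter

def codon_usage(cds):
    """Codon counts for one in-frame coding sequence (length a multiple of 3)."""
    if len(cds) % 3 != 0:
        raise ValueError("CDS length must be a multiple of 3")
    return Counter(cds[i:i + 3] for i in range(0, len(cds), 3))

def codonome(transcriptome):
    """Sketch of a codonome computation: each CDS's codon counts are weighted
    by its expression level and summed over all expressed mRNAs.
    transcriptome: iterable of (cds_string, expression_level) pairs."""
    total = Counter()
    for cds, expression in transcriptome:
        for codon, n in codon_usage(cds).items():
            total[codon] += n * expression
    return total
```

For example, `codonome([("ATGAAA", 10.0), ("ATGGGG", 2.0)])` yields 12 weighted occurrences of ATG, 10 of AAA, and 2 of GGG, giving the per-codon totals whose distribution can then be compared with the static codon bias of the genome.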
Abstract:
Introgression of domestic cat genes into European wildcat (Felis silvestris silvestris) populations and the reduction of the wildcat's range in Europe, driven by habitat loss and fragmentation, are considered two of the main conservation problems for this endangered feline. This thesis addressed questions related to hybridization with the domestic cat and population fragmentation from a conservation genetics perspective. We combined highly polymorphic loci, Bayesian statistical inference, and landscape analysis tools to investigate the origin of the geographic-genetic substructure of European wildcats in Italy and Europe. The genetic variability of microsatellites showed that the European wildcat populations currently distributed in Italy differentiated in, and expanded from, two distinct glacial refugia during the Last Glacial Maximum. The genetic and geographic substructure detected between the eastern and western sides of the Apennine ridge resulted from adaptation to the specific ecological conditions of Mediterranean habitats. European wildcat populations in Europe are strongly structured into five geographic-genetic macro-clusters corresponding to: the Italian Peninsula and Sicily; the Balkans and north-eastern Italy; eastern Germany; central Europe; and the Iberian Peninsula. The central European population might have differentiated in the extra-Mediterranean Würm ice age refuge areas (the Northern Alps, the Carpathians, and the Bulgarian mountain systems), while the divergence among and within the southern European populations might have resulted from the Pleistocene biogeographical framework of Europe, with three southern refugia localized in the Balkans, the Italian Peninsula, and the Iberian Peninsula. We further combined the most informative autosomal SNPs with uniparental markers (mtDNA and Y-linked) to accurately detect parental genotypes and levels of introgressive hybridization between European wild and domestic cats.
A total of 11 hybrids were identified. Domestic mitochondrial haplotypes shared with some wild individuals led us to hypothesize that ancient introgression events might have occurred and that further investigation is recommended.
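The core idea behind assigning individuals to parental populations from multilocus genotypes can be illustrated with a naive likelihood comparison. This is a simplified sketch of the principle only (full Bayesian admixture models such as those used in the thesis estimate ancestry proportions jointly); the allele frequencies, population labels, and the HWE/linkage-equilibrium assumptions are illustrative.

```python
import math

def genotype_loglik(genotype, allele_freqs):
    """Log-likelihood of a diploid multilocus genotype under Hardy-Weinberg and
    linkage equilibrium, given one population's per-locus allele frequencies.
    genotype: list of (allele1, allele2); allele_freqs: list of {allele: freq}."""
    ll = 0.0
    for (a1, a2), freqs in zip(genotype, allele_freqs):
        p1 = freqs.get(a1, 1e-6)  # small floor for alleles unseen in the reference
        p2 = freqs.get(a2, 1e-6)
        prob = p1 * p2 if a1 == a2 else 2.0 * p1 * p2  # homozygote vs heterozygote
        ll += math.log(prob)
    return ll

def assign_population(genotype, reference_pops):
    """Assign a genotype to the reference population (e.g. 'wild' vs 'domestic')
    with the highest log-likelihood; individuals scoring similarly in both
    references are candidate hybrids for closer examination."""
    return max(reference_pops,
               key=lambda pop: genotype_loglik(genotype, reference_pops[pop]))
```

Combining such nuclear assignments with uniparental markers is what allows discordant cases, such as a nuclear "wild" individual carrying a domestic mitochondrial haplotype, to be flagged as possible ancient introgression.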