17 results for Tools.
Abstract:
Human movement analysis (HMA) aims to measure the ability of a subject to stand or to walk. In the field of HMA, tests are performed daily in research laboratories, hospitals and clinics, aiming to diagnose a disease, distinguish between disease entities, monitor the progress of a treatment and predict the outcome of an intervention [Brand and Crowninshield, 1981; Brand, 1987; Baker, 2006]. To achieve these purposes, clinicians and researchers use measurement devices such as force platforms, stereophotogrammetric systems, accelerometers and baropodometric insoles. This thesis focuses on the force platform (FP) and in particular on the quality assessment of FP data. The principal objective of our work was the design and experimental validation of a portable system for the in situ calibration of FPs. The thesis is structured as follows. Chapter 1: Description of the physical principles underlying the functioning of a FP and how these principles are used to create force transducers, such as strain gauges and piezoelectric transducers; then, a description of the two categories of FPs (three- and six-component), the signal acquisition (hardware structure), and the signal calibration; finally, a brief description of the use of FPs in HMA, for balance or gait analysis. Chapter 2: Description of inverse dynamics, the most common method used in the field of HMA. This method uses the signals measured by a FP to estimate kinetic quantities, such as joint forces and moments. These variables cannot be measured directly without very invasive techniques; consequently, they can only be estimated with indirect techniques such as inverse dynamics. Finally, a brief description of the sources of error present in gait analysis. Chapter 3: State of the art in FP calibration.
The selected literature is divided into sections, each of which describes: systems for the periodic control of FP accuracy; systems for error reduction in FP signals; and systems and procedures for the construction of a FP. In particular, a calibration system designed by our group, based on the theoretical method proposed by ?, is described in detail. This system was the “starting point” for the new system presented in this thesis. Chapter 4: Description of the new system, divided into its parts: 1) the algorithm; 2) the device; and 3) the calibration procedure for correctly performing the calibration process. The characteristics of the algorithm were optimized by a simulation approach, and the results are presented here. In addition, the different versions of the device are described. Chapter 5: Experimental validation of the new system, achieved by testing it on 4 commercial FPs. The effectiveness of the calibration was verified by measuring, before and after calibration, the accuracy of the FPs in measuring the center of pressure of an applied force. The new system can estimate local and global calibration matrices; using these, the non-linearity of the FPs was quantified and locally compensated. Furthermore, a non-linear calibration is proposed. This calibration compensates for the non-linear effect in the FP functioning due to the bending of its upper plate. The experimental results are presented. Chapter 6: Influence of the FP calibration on the estimation of kinetic quantities with the inverse dynamics approach. Chapter 7: The conclusions of this thesis: the need for calibration of FPs and the consequent enhancement in kinetic data quality. Appendix: Calibration of the load cell (LC) used in the presented system. Different calibration set-ups of a 3D force transducer are presented, and the optimal set-up is proposed, with particular attention to the compensation of non-linearities. The optimal set-up is verified by experimental results.
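As a concrete illustration of the calibration step described above, the sketch below applies a 6x6 calibration matrix to raw six-component FP output and derives the center of pressure (COP) from the calibrated signals. The matrix values, the cross-talk term and the COP sign convention are illustrative assumptions, not the thesis's actual calibration results.

```python
import numpy as np

# Hypothetical 6x6 calibration matrix C mapping raw six-component output
# (Fx, Fy, Fz, Mx, My, Mz) to calibrated loads. In the thesis, C would be
# estimated by the in situ calibration procedure; here it is identity plus
# one made-up cross-talk correction term.
C = np.eye(6)
C[0, 2] = -0.01  # illustrative cross-talk of Fz into the Fx channel

def calibrate(raw, C):
    """Return calibrated loads for raw samples of shape (n, 6)."""
    return raw @ C.T

# One raw sample: small Fx, a 500 N vertical load, no moments.
raw = np.array([[1.0, 0.0, 500.0, 0.0, 0.0, 0.0]])
cal = calibrate(raw, C)

# COP from the calibrated signals, assuming the reference origin lies on
# the platform surface: COPx = -My / Fz, COPy = Mx / Fz.
Fz = cal[0, 2]
cop_x = -cal[0, 4] / Fz
cop_y = cal[0, 3] / Fz
```

Measuring COP error before and after applying C is exactly the accuracy check the abstract describes for the four commercial FPs.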
Abstract:
Many new Escherichia coli outer membrane proteins have recently been identified by proteomics techniques. However, poorly expressed proteins and proteins expressed only under certain conditions may escape detection when wild-type cells are grown under standard conditions. Here, we have taken a complementary approach where candidate outer membrane proteins have been identified by bioinformatics prediction, cloned and overexpressed, and finally localized by cell fractionation experiments. Out of eight predicted outer membrane proteins, we have confirmed the outer membrane localization for five—YftM, YaiO, YfaZ, CsgF, and YliI—and also provide preliminary data indicating that a sixth—YfaL—may be an outer membrane autotransporter.
Abstract:
The continuous increase in genome sequencing projects has produced a huge amount of data in the last 10 years: currently more than 600 prokaryotic and 80 eukaryotic genomes are fully sequenced and publicly available. However, sequencing alone determines only the raw nucleotide sequence of a genome. This is just the first step of the genome annotation process, which deals with the issue of assigning biological information to each sequence. Annotation is performed at each level of the biological information processing mechanism, from DNA to protein, and cannot be accomplished by in vitro analysis procedures alone, which are extremely expensive and time-consuming when applied at such a large scale. Thus, in silico methods need to be used to accomplish the task. The aim of this work was the implementation of predictive computational methods to allow a fast, reliable and automated annotation of genomes and proteins starting from amino acid sequences. The first part of the work focused on the implementation of a new machine learning based method for the prediction of the subcellular localization of soluble eukaryotic proteins. The method, called BaCelLo, was developed in 2006. Its main peculiarity is independence from biases present in the training dataset, which cause the over-prediction of the most represented examples in all the other predictors developed so far. This important result was achieved by a modification, made by myself, of the standard Support Vector Machine (SVM) algorithm, creating the so-called Balanced SVM. BaCelLo is able to predict the most important subcellular localizations in eukaryotic cells, and three kingdom-specific predictors were implemented. In two extensive comparisons, carried out in 2006 and 2008, BaCelLo was shown to outperform all currently available state-of-the-art methods for this prediction task.
BaCelLo was subsequently used to completely annotate 5 eukaryotic genomes, by integrating it into a pipeline of predictors developed at the Bologna Biocomputing group by Dr. Pier Luigi Martelli and Dr. Piero Fariselli. An online database, called eSLDB, was developed by integrating, for each amino acid sequence extracted from the genomes, the predicted subcellular localization merged with experimental and similarity-based annotations. In the second part of the work a new machine learning based method was implemented for the prediction of GPI-anchored proteins. The method efficiently predicts from the raw amino acid sequence both the presence of the GPI anchor (by means of an SVM) and the position in the sequence of the post-translational modification event, the so-called ω-site (by means of a Hidden Markov Model, HMM). The method, called GPIPE, was shown to greatly improve the prediction of GPI-anchored proteins over all previously developed methods. GPIPE was able to predict up to 88% of the experimentally annotated GPI-anchored proteins while maintaining a false positive rate as low as 0.1%. GPIPE was used to completely annotate 81 eukaryotic genomes, and more than 15000 putative GPI-anchored proteins were predicted, 561 of which are found in H. sapiens. On average, 1% of a proteome is predicted as GPI-anchored. A statistical analysis was performed on the composition of the regions surrounding the ω-site, which allowed the definition of specific amino acid abundances in the different regions considered. Furthermore, the hypothesis proposed in the literature that compositional biases are present among the four major eukaryotic kingdoms was tested and rejected. All the developed predictors and databases are freely available at: BaCelLo, http://gpcr.biocomp.unibo.it/bacello; eSLDB, http://gpcr.biocomp.unibo.it/esldb; GPIPE, http://gpcr.biocomp.unibo.it/gpipe
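The class-balancing idea behind the Balanced SVM can be illustrated with a simple weighting scheme: penalize misclassification of each class inversely to its frequency, so that rare localizations are not swamped by over-represented ones. The sketch below uses the common heuristic w_c = N / (K * n_c); it is a generic illustration of the idea, not the exact modification implemented in BaCelLo, and the example labels are made up.

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weight each class inversely to its frequency: w_c = N / (K * n_c),
    where N is the dataset size, K the number of classes and n_c the class
    count. Misclassifying a rare class then costs as much overall as
    misclassifying a common one."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * nc) for c, nc in counts.items()}

# Toy training set dominated by one localization class
labels = ["cytoplasm"] * 80 + ["nucleus"] * 15 + ["mitochondrion"] * 5
weights = balanced_class_weights(labels)
```

These per-class weights would multiply the error penalty of each training example in the SVM objective, counteracting the over-prediction bias described above.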
Abstract:
The dolphin (Tursiops truncatus) is a mammal that is adapted to life in a totally aquatic environment. Despite the popularity and even iconic status of the dolphin, our knowledge of its physiology, its unique adaptations and the effects of environmental stressors on it is limited. One approach to improving this limited understanding is the implementation of established cellular and molecular methods to provide sensitive and insightful information for dolphin biology. We initiated our studies with the analysis of wild dolphin peripheral blood leukocytes, which have the potential to be informative of the animal’s global immune status. Transcriptomic profiles from almost 200 individual samples were analyzed using a newly developed species-specific microarray to assess its value as a prognostic and diagnostic tool. Functional genomics analyses were informative of stress-induced gene expression profiles and also of geographical location-specific transcriptomic signatures, determined by the interaction of genetic, disease and environmental factors. We have developed quantitative metrics to unambiguously characterize the phenotypic properties of dolphin cells in culture. These metrics can provide identifiable characteristics and baseline data which will enable identification of changes in the cells due to time in culture. We have also developed a novel protocol to isolate primary cultures from cryopreserved tissue of stranded marine mammals, establishing a tissue (and cell) biorepository, a new approach that can provide a solution to the limited availability of samples. The work presented represents the development and application of tools for the study of the biology, health and physiology of the dolphin, and establishes their relevance for future studies of the impact of environmental infection and stress on the dolphin.
Abstract:
The subject of this thesis is multicolour bioluminescence analysis and how it can provide new tools for drug discovery and development. The mechanism of colour tuning in bioluminescent reactions is not yet fully understood, but it is the object of intense research and several hypotheses have been proposed. In the past decade, key residues in the active site of the enzyme, or on the surface surrounding the active site, have been identified as responsible for the different colours of emission. However, since the bioluminescence reaction depends strictly on the interaction between the enzyme and its substrate D-luciferin, modification of the substrate can also lead to a different emission spectrum. In recent years, firefly luciferase and other luciferases have undergone mutagenesis in order to obtain mutants with different emission characteristics. Thanks to these new discoveries in the bioluminescence field, multicolour luciferases can nowadays be employed in bioanalysis for assay development and imaging purposes. The use of multicolour bioluminescent enzymes has expanded the potential range of applications in vitro and in vivo: multiple analyses and more information can be obtained from the same analytical session, saving cost and time. This thesis focuses on several applications of multicolour bioluminescence for high-throughput screening and in vivo imaging. Multicolour luciferases can be employed as new tools for drug discovery and development, and some examples are provided in the different chapters. New red codon-optimized luciferases have been demonstrated to be improved tools for bioluminescence imaging in small animals, and the possibility of combining red and green luciferases for BLI has been achieved, even if some aspects of the methodology remain challenging and need further improvement. In vivo bioluminescence imaging has undergone rapid progress since its first application no more than 15 years ago, and it is becoming an indispensable tool in pharmacological research.
At the same time, the development of more sensitive and advanced microscopes and low-light imagers, for better visualization and quantification of multicolour signals, will boost research and discovery in the life sciences in general and in drug discovery and development in particular.
Abstract:
Proper hazard identification has become progressively more difficult to achieve, as witnessed by several major accidents that took place in Europe, such as the ammonium nitrate explosion at Toulouse (2001) and the vapour cloud explosion at Buncefield (2005), whose accident scenarios were not considered in their site safety cases. Furthermore, the rapid renewal of industrial technology has brought about the need to upgrade hazard identification methodologies. Accident scenarios of emerging technologies, which are not yet properly identified, may remain unidentified until they take place for the first time. The consideration of atypical scenarios, deviating from normal expectations of unwanted events or from worst-case reference scenarios, is thus extremely challenging. A specific method named Dynamic Procedure for Atypical Scenarios Identification (DyPASI) was developed as a complementary tool to bow-tie identification techniques. The main aim of the methodology is to provide an easier but comprehensive hazard identification of the industrial process analysed, by systematizing information from early signals of risk related to past events, near misses and inherent studies. DyPASI was validated on two examples of new and emerging technologies: Liquefied Natural Gas regasification and Carbon Capture and Storage. The study broadened the knowledge of the related emerging risks and, at the same time, demonstrated that DyPASI is a valuable tool for obtaining a complete and updated overview of potential hazards. Moreover, in order to tackle the underlying accident causes of atypical events, three methods for the development of early warning indicators were assessed: the Resilience-based Early Warning Indicator (REWI) method, the Dual Assurance method and the Emerging Risk Key Performance Indicator method.
REWI was found to be the most complementary and effective of the three, demonstrating that its synergy with DyPASI would be an adequate strategy to improve hazard identification methodologies towards the capture of atypical accident scenarios.
Abstract:
The research presented in my PhD thesis is part of a wider European project, FishPopTrace, focused on the traceability of fish populations and products. My work aimed at developing and analyzing novel genetic tools for a widely distributed marine fish species, the European hake (Merluccius merluccius), in order to investigate population genetic structure and explore potential applications to traceability scenarios. A total of 395 SNPs (Single Nucleotide Polymorphisms) were discovered from a massive collection of Expressed Sequence Tags obtained by high-throughput sequencing, and validated on 19 geographic samples from the Atlantic and Mediterranean. Genome-scan approaches were applied to identify polymorphisms in genes potentially under divergent selection (outlier SNPs), showing higher genetic differentiation among populations with respect to the average observed across loci. Comparative analyses of population structure were carried out on putative neutral and outlier loci at wide (Atlantic and Mediterranean samples) and regional (samples within each basin) spatial scales, to disentangle the effects of demographic and adaptive evolutionary forces on the genetic structure of European hake populations. The results demonstrated the potential of outlier loci to unveil fine-scale genetic structure, possibly identifying locally adapted populations, despite the weak signal shown by putative neutral SNPs. The application of outlier SNPs within the framework of fishery resource management was also explored. A minimum panel of SNP markers showing maximum discriminatory power was selected and applied to a traceability scenario aimed at identifying the basin (and hence the stock) of origin, Atlantic or Mediterranean, of individual fish. This case study illustrates how molecular analytical technologies have operational potential in real-world contexts and, more specifically, potential to support fisheries control and enforcement and fish and fish product traceability.
Abstract:
Since the advent of liberalization, airports and carriers have experienced change. The greatest improvement in airport administration has been a more commercial and efficient management. The forms of economic regulation and the characteristics of managerial governance were investigated. Twelve countries were chosen to investigate the state of world air transport, including both countries with mature systems and emerging countries. Traffic distribution was analyzed with the HHI index to highlight airports with a concentration greater than 0.25 (in accordance with US regulations); the airport system was analyzed with the Gini index and the dominance index. Finally, game theory proved to be a valid support for studying the air transport market, including through the use of DP-type games.
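The concentration measures mentioned above are straightforward to compute. The sketch below implements the HHI (the sum of squared traffic shares, flagged when above the 0.25 US threshold) and a standard form of the Gini index; the airport traffic figures are made up for illustration and are not the thesis data.

```python
def hhi(traffic):
    """Herfindahl-Hirschman Index: sum of squared traffic shares, on a
    0..1 scale. Values above 0.25 indicate high concentration under the
    US threshold cited in the abstract."""
    total = sum(traffic)
    return sum((t / total) ** 2 for t in traffic)

def gini(values):
    """Gini index of inequality, here applied to airport traffic volumes
    (0 = perfectly even distribution, (n-1)/n = fully concentrated)."""
    v = sorted(values)
    n = len(v)
    cum = sum((i + 1) * x for i, x in enumerate(v))
    return (2 * cum) / (n * sum(v)) - (n + 1) / n

# Illustrative passenger shares for a three-airport system (not thesis data)
traffic = [50.0, 30.0, 20.0]
concentration = hhi(traffic)  # 0.5^2 + 0.3^2 + 0.2^2 = 0.38, above 0.25
```

Here the dominant airport pushes the HHI above 0.25, the situation the analysis is designed to flag.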
Abstract:
In recent years, the use of Reverse Engineering systems has attracted considerable interest for a wide range of applications. Therefore, many research activities focus on the accuracy and precision of the acquired data and on improvements to the post-processing phase. In this context, this PhD thesis deals with the definition of two novel methods for data post-processing and for data fusion between physical and geometrical information. In particular, a technique has been defined for characterizing the error in the 3D point coordinates acquired by an optical triangulation laser scanner, with the aim of identifying adequate correction arrays to apply under different acquisition parameters and operative conditions. The systematic error in the acquired data is thus compensated, in order to increase accuracy. Moreover, the definition of a 3D thermogram is examined. The geometrical information of an object and its thermal properties, coming from a thermographic inspection, are combined in order to obtain a temperature value for each recognizable point. Data acquired by the optical triangulation laser scanner are also used to normalize temperature values and make the thermal data independent of the thermal camera's point of view.
Abstract:
The discovery of the Cosmic Microwave Background (CMB) radiation in 1965 is one of the fundamental milestones supporting the Big Bang theory. The CMB is one of the most important sources of information in cosmology. The excellent accuracy of the recent CMB data of the WMAP and Planck satellites confirmed the validity of the standard cosmological model and set a new challenge for the data analysis processes and their interpretation. In this thesis we deal with several aspects and useful tools of the data analysis, focusing on their optimization in order to fully exploit the Planck data and contribute to the final published results. The issues investigated are: the change of coordinates of CMB maps using the HEALPix package; the problem of the aliasing effect in the generation of low resolution maps; and the comparison of the Angular Power Spectrum (APS) extraction performance of the optimal QML method, implemented in the code called BolPol, with that of the pseudo-Cl method, implemented in Cromaster. The QML method was then applied to the Planck data at large angular scales to extract the CMB APS. The same method was also applied to analyze the TT parity and Low Variance anomalies in the Planck maps, showing a consistent deviation from the standard cosmological model; the possible origins of these results are discussed. The Cromaster code was instead applied to the 408 MHz and 1.42 GHz surveys, focusing on the analysis of the APS of selected regions of the synchrotron emission. The new generation of CMB experiments will be dedicated to polarization measurements, which require high-accuracy devices for separating the polarizations. Here a new technology, called Photonic Crystals, is exploited to develop a new polarization splitter device, and its performance is compared to that of the devices used nowadays.
Abstract:
From the late 1980s, the automation of sequencing techniques and the spread of computers gave rise to a flourishing number of new molecular structures and sequences and to a proliferation of new databases in which to store them. Here, three computational approaches are presented that are able to analyse the massive amount of publicly available data in order to answer important biological questions. The first strategy studies the incorrect assignment of the first AUG codon in a messenger RNA (mRNA), due to the incomplete determination of its 5' end sequence. An extension of the mRNA 5' coding region was identified in 477 human loci, out of all known human mRNAs analysed, using an automated expressed sequence tag (EST)-based approach. Proof-of-concept confirmation was obtained by in vitro cloning and sequencing for GNB2L1, QARS and TDP2, and the consequences for functional studies are discussed. The second approach analyses codon bias, the phenomenon in which distinct synonymous codons are used with different frequencies, and, after integration with a gene expression profile, estimates the total number of codons present across all the expressed mRNAs (named here the "codonome value") in a given biological condition. Systematic analyses across different pathological and normal human tissues and multiple species show a surprisingly tight correlation between the codon bias and the codonome bias. The third approach studies the expression of genes implicated in human autism spectrum disorder (ASD). ASD-implicated genes sharing microRNA response elements (MREs) for the same microRNA are co-expressed in brain samples from healthy and ASD-affected individuals. The different expression of a recently identified long non-coding RNA, which has four MREs for the same microRNA, could disrupt the equilibrium in this network, but further analyses and experiments are needed.
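The "codonome value" idea, counting codons across all expressed mRNAs weighted by their expression levels, can be sketched as follows. The sequences, expression levels and the simple count-times-level weighting are illustrative assumptions, not the thesis's actual data or estimator.

```python
from collections import Counter

def codon_counts(cds):
    """Count codon occurrences in a coding sequence, reading in frame
    from the first base and ignoring any trailing partial codon."""
    return Counter(cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3))

def codonome(transcripts):
    """Total codon usage across expressed mRNAs, weighting each CDS's
    codon counts by its expression level (a sketch of the 'codonome
    value' idea described above)."""
    total = Counter()
    for cds, expression in transcripts:
        for codon, n in codon_counts(cds).items():
            total[codon] += n * expression
    return total

# Toy example: two short CDSs with hypothetical expression levels
transcripts = [("ATGGCTGCT", 10), ("ATGGCC", 2)]
usage = codonome(transcripts)
# "GCT" appears twice in the first CDS at level 10 -> 20; "ATG" -> 10 + 2 = 12
```

Comparing such expression-weighted totals against raw codon frequencies is what allows the codon bias and codonome bias to be correlated across tissues and species.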
Abstract:
The present thesis focuses on the problem of robust output regulation for minimum phase nonlinear systems by means of identification techniques. Given a controlled plant and an exosystem (an autonomous system that generates possible references or disturbances), the control goal is to design a proper regulator able to process the only measure available, i.e. the error/output variable, in order to make it asymptotically vanish. In this context, such a regulator can be designed following the well-known "internal model principle", which states that the regulation objective can be achieved by embedding a replica of the exosystem model in the controller structure. The main problem arises when the exosystem model is affected by parametric or structural uncertainties: in this case, it is not possible to reproduce the exact behavior of the exogenous system in the regulator, and hence the control goal cannot be achieved. In this work, the idea is to solve the problem by developing a general framework in which a standard regulator coexists with an estimator able to guarantee (when possible) the best estimate of all the uncertainties present in the exosystem, in order to give robustness to the overall control loop.
Abstract:
Falls are common and burdensome accidents among the elderly. About one third of the population aged 65 years or more experiences at least one fall each year. Fall risk assessment is believed to be beneficial for fall prevention. This thesis is about prognostic tools for falls for community-dwelling older adults. We provide an overview of the state of the art. We then take different approaches: we propose a theoretical probabilistic model to investigate some properties of prognostic tools for falls; we present a tool whose parameters were derived from data in the literature; and we train and test a data-driven prognostic tool. Finally, we present some preliminary results on the prediction of falls through features extracted from wearable inertial sensors. Heterogeneity in validation results is expected from theoretical considerations and is observed in empirical data. Differences in study design hinder comparability and collaborative research. Consistent with the multifactorial etiology of falls, assessment of multiple risk factors is needed in order to achieve good predictive accuracy.
Abstract:
With this work I elucidated new and unexpected mechanisms of two strong and highly specific transcription inhibitors: triptolide and camptothecin. Triptolide (TPL) is a diterpene epoxide derived from the Chinese plant Tripterygium wilfordii Hook F. TPL inhibits the ATPase activity of XPB, a subunit of the general transcription factor TFIIH. In this thesis I found that the degradation of Rbp1 (the largest subunit of RNA Polymerase II) caused by TPL treatment is preceded by a hyperphosphorylation event at serine 5 of the carboxy-terminal domain (CTD) of Rbp1. This event is concomitant with a block of RNA Polymerase II at the promoters of active genes. The enzyme responsible for the Ser5 hyperphosphorylation event is CDK7. Notably, CDK7 downregulation rescued both the Ser5 hyperphosphorylation and the Rbp1 degradation triggered by TPL. Camptothecin (CPT), derived from the plant Camptotheca acuminata, specifically inhibits topoisomerase 1 (Top1). We first found that CPT induced antisense transcription at divergent CpG island promoters. Interestingly, by immunofluorescence experiments, CPT was found to induce a burst of R loop structures (DNA/RNA hybrids) at nucleoli and mitochondria. We then decided to investigate the role of Top1 in R loop homeostasis through a short interfering RNA (RNAi) approach. Using DNA/RNA immunoprecipitation techniques coupled to NGS, I found that Top1 depletion induces a genome-wide increase in R loops, occurring over the entire gene body. At a subset of loci, R loops were particularly affected after Top1 depletion: some of these genes showed the formation of new R loop structures, whereas other loci showed a reduction of R loops. Interestingly, we found that new peaks usually appear at tandem or divergent genes over the entire gene body, while losses of R loop peaks seem to be specific to the 3' end regions of convergent genes.