15 results for real-time quantitative PCR
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
1) Background: The most common method to evaluate clarithromycin resistance is the E-test, but it is time-consuming. Resistance of Hp to clarithromycin is due to point mutations in the 23S rRNA. Eight different point mutations have been related to clarithromycin resistance, but the large majority of cases depend on three point mutations (A2142C, A2142G and A2143G). Novel PCR-based clarithromycin resistance assays, applicable even to paraffin-embedded biopsy specimens, have been proposed. Aims: to assess clarithromycin resistance by detecting these point mutations (with the E-test as the reference method) and, secondly, to investigate the relation with MIC values. Methods: Paraffin-embedded biopsies of Hp-positive patients were retrieved. The A2142C, A2142G and A2143G point mutations were detected by molecular analysis after DNA extraction, using a TaqMan real-time PCR. Results: The study enrolled 86 patients: 46 resistant and 40 susceptible to clarithromycin. The Hp status was evaluated at endoscopy by rapid urease test (RUT), histology and Hp culture. According to real-time PCR, 37 specimens were susceptible to clarithromycin (wild-type DNA) while the remaining 49 specimens (57%) were resistant. A2143G was the most frequent mutation. A2142C always expresses a resistant phenotype, whereas A2142G leads to a resistant phenotype only if homozygous. 2) Background: The colonoscopy workload of endoscopy services is increasing due to colorectal cancer prevention. We tested a combination of faecal tests to improve accuracy and to prioritize access to colonoscopy. Methods: we tested a combination of faecal tests (FOBT, M2-PK and calprotectin) in a group of 280 patients requiring colonoscopy. Results: 47 patients had CRC and 85 had advanced adenoma(s) at colonoscopy/histology. Among single tests for CRC detection, FOBT had the highest specificity and PPV, while M2-PK had the highest sensitivity and NPV. Combinations were more interesting in terms of PPV, and the best combination of tests was i-FOBT + M2-PK.
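The single-test comparison above rests on the four standard diagnostic metrics. As a minimal sketch of how they are derived from a 2x2 confusion table, the counts below are hypothetical illustrations, not the study's actual data:

```python
# Sketch: evaluating a single faecal test for CRC detection via the four
# standard metrics compared in the abstract. All counts are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for one test on 280 patients, 47 of whom have CRC:
m = diagnostic_metrics(tp=40, fp=30, fn=7, tn=203)
```

Combining two tests (e.g. requiring both to be positive) changes the table's counts and hence trades sensitivity against PPV, which is the trade-off the abstract exploits.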
Abstract:
Autism is a neurodevelopmental disorder characterized by impaired verbal communication, limited reciprocal social interaction, restricted interests and repetitive behaviours. Twin and family studies indicate a large genetic contribution to ASDs (Autism Spectrum Disorders). During my Ph.D. I was involved in several projects in which I used different genetic approaches to identify susceptibility genes for autism on chromosomes 2, 7 and X: 1) High-density SNP association and CNV analysis of two autism susceptibility loci. The International Molecular Genetic Study of Autism Consortium (IMGSAC) previously identified linkage loci on chromosomes 7 and 2, termed AUTS1 and AUTS5, respectively. In this study, we evaluated the patterns of linkage disequilibrium (LD) and the distribution of haplotype blocks, using data from the HapMap project, across the two strongest linkage peaks on chromosomes 2 and 7. More than 3000 SNPs were selected in each locus, covering all known genes as well as non-genic highly conserved sequences. All markers were genotyped to perform a high-density association analysis and to explore copy number variation within these regions. The study sample consisted of 127 and 126 multiplex families, showing linkage to the AUTS1 and AUTS5 regions, respectively, and 188 gender-matched controls. Association and CNV analysis implicated several new genes, including IMMP2L and DOCK4 on chromosome 7 and ZNF533 and NOSTRIN on chromosome 2. In particular, my contribution to this project focused on the characterization of the best candidate gene in each locus. For the AUTS5 locus I carried out a transcript study of ZNF533 in different human tissues to verify which isoforms and start exons were expressed. High transcript variability and a new, previously undescribed exon were identified in this analysis.
Furthermore, I selected 31 probands carrying the risk haplotype and performed a mutation screen of all known exons in order to identify novel coding variants associated with autism. At the AUTS1 locus, a duplication was detected in one multiplex family, transmitted from the father to an affected son. This duplication interrupts two genes, IMMP2L and DOCK4, and warranted further analysis. I therefore screened the IMGSAC cohort (285 multiplex families) using a QMPSF assay (Quantitative Multiplex PCR of Short Fluorescent Fragments) to test whether CNVs in this genic region segregate with the autism phenotype, and compared their frequency with a sample of 475 UK controls. Evidence for a role of DOCK4 in autism susceptibility was supported by independent replication of association at rs2217262 and by the finding of a deletion segregating in a sib-pair family. 2) Analysis of X chromosome inactivation. Skewed X chromosome inactivation (XCI) is observed in females carrying gene mutations involved in several X-linked syndromes. We aimed to estimate the role of X-linked genes in ASD susceptibility by ascertaining the XCI pattern in a sample of 543 informative mothers of children with ASD and in a sample of 164 affected girls. The study sample included families from different European consortia. I analysed the XCI pattern in a sample of Italian mothers from singleton families with ASD, as well as in control groups (144 adult females and 40 young females). We observed no significant excess of skewed XCI in families with ASD. Interestingly, two mothers and one girl carrying known mutations in X-linked genes (NLGN3, ATRX, MECP2) showed highly skewed XCI, suggesting that ascertainment of XCI could reveal families with X-linked mutations. Linkage analysis was carried out in the subgroup of multiplex families with skewed XCI (≥80:20), and a modestly increased allele sharing was obtained in the Xq27-Xq28 region, with a peak Z score of 1.75 close to rs719489.
In this region, FMR1 and MECP2 have been associated with autism in some cases and therefore represent candidates for the disorder. I performed a mutation screen of MECP2 in 33 unrelated probands from IMGSAC and Italian families showing XCI skewness. Recently, Xq28 duplications including MECP2 have been identified in families with mental retardation, with asymptomatic carrier females showing extreme (>85%) skewing of XCI. For this reason I used the sample of probands from X-skewed families to perform CNV analysis by real-time quantitative PCR. No duplications were found in our sample. I also confirmed all data using the MLPA assay (Multiplex Ligation-dependent Probe Amplification) as an alternative method. 3) ASMT as a functional candidate gene for autism. Recently, a possible involvement of the acetylserotonin O-methyltransferase (ASMT) gene in susceptibility to ASDs was reported: mutation screening of the ASMT gene in 250 individuals from the PARIS collection revealed several rare variants with a likely functional role; moreover, significant association was reported for two SNPs (rs4446909 and rs5989681) located in one of the two alternative promoters of the gene. To further investigate these findings, I carried out a replication study using a sample of 263 affected individuals from the IMGSAC collection and 390 control individuals. Several rare mutations were identified, including the splice-site mutation IVS5+2T>C and the L326F substitution previously reported by Melke et al. (2007), but the same rare variants were also found in control individuals in our study. Interestingly, a new R319X stop mutation was found in a single autism proband of Italian origin and was absent from the entire control sample. Furthermore, the association with the SNPs in the ASMT promoter B was not replicated in our case-control study.
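The XCI analysis above classifies inactivation patterns against fixed cut-offs: ≥80:20 counts as skewed, and >85% as the extreme skewing seen in Xq28-duplication carriers. A minimal sketch of that classification rule (the category names are illustrative, not the study's terminology):

```python
# Sketch: classifying an X chromosome inactivation (XCI) pattern from the
# percentage of cells inactivating the same X, using the thresholds
# mentioned in the abstract (80:20 for skewed, >85% for extreme skewing).

def classify_xci(percent_major_allele):
    """Classify an XCI pattern; input is the % of the predominant allele."""
    if percent_major_allele > 85:
        return "extremely skewed"
    if percent_major_allele >= 80:
        return "skewed"
    return "random"

category = classify_xci(92)  # an asymptomatic Xq28-duplication carrier profile
```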
Abstract:
Background. The outcome of elderly acute myeloid leukemia (AML) patients is dismal. Targeted therapies might improve current results by overcoming drug resistance and reducing toxicity. Aim. We conducted a phase II study to assess the efficacy and toxicity of the combination of Tipifarnib (Zarnestra®) and Bortezomib (Velcade®) in AML patients >18 years unfit for conventional therapy, or >60 years in relapse. Furthermore, we aimed to evaluate the predictive value of the RASGRP1/APTX ratio, which was previously found to be associated with treatment sensitivity in patients receiving Zarnestra alone. Methods. Velcade (1.0 mg/m2) was administered as a weekly infusion for 3 weeks (days 1, 8, 15). Zarnestra was administered at a dose of 300-600 mg BID for 21 consecutive days. Real-time quantitative PCR (q-PCR) was used for RASGRP1/APTX quantification. Results. 50 patients were enrolled. Median age was 71 years (range 56-89). 3 patients achieved complete remission (CR) and 1 a partial response (PR). 2 patients obtained a hematological improvement (HI), and 3 died during marrow aplasia. 10 had progressive disease (PD) and the remaining patients showed stable disease (SD). RASGRP1/APTX was evaluated before treatment initiation on bone marrow (BM) and/or peripheral blood (PB). The median RASGRP1/APTX value on BM was higher in responder (R) than in non-responder (NR) patients (p=0.006). Interestingly, no marrow responses were recorded in patients with a BM RASGRP1/APTX ratio <12, while the response rate was 50% in patients with a ratio >12. Toxicity was overall mild, the most common event being febrile neutropenia. Conclusion. We conclude that the clinical efficacy of the Zarnestra-Velcade combination was similar to that reported for Zarnestra alone. However, we confirmed that the RASGRP1/APTX level is an effective predictor of response.
Though a high RASGRP1/APTX ratio is relatively rare (~10% of cases), Zarnestra (±Velcade) may represent an important option in a subset of high-risk/frail AML patients.
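A two-gene expression ratio like RASGRP1/APTX is commonly derived from q-PCR threshold cycles (Ct) via a 2^ΔCt conversion, which assumes roughly equal amplification efficiencies for both targets. The sketch below applies the response cut-off of 12 reported in the abstract; the Ct values and the ΔCt-based conversion are illustrative assumptions, not the study's published protocol:

```python
# Sketch: RASGRP1/APTX ratio from q-PCR Ct values, and the ratio > 12
# response cut-off from the abstract. Equal-efficiency (100%) amplification
# is assumed for both targets; the Ct values below are hypothetical.

def rasgrp1_aptx_ratio(ct_rasgrp1, ct_aptx):
    """Relative RASGRP1/APTX expression: lower Ct means more transcript."""
    return 2.0 ** (ct_aptx - ct_rasgrp1)

def predicted_responder(ratio, cutoff=12.0):
    """Apply the BM ratio cut-off associated with marrow response."""
    return ratio > cutoff

r = rasgrp1_aptx_ratio(ct_rasgrp1=24.0, ct_aptx=28.0)  # 2**4 = 16.0
```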
Abstract:
Motion control is a sub-field of automation in which the position and/or velocity of machines are controlled using some type of device. In motion control, the position, velocity, force, pressure, etc., profiles are designed in such a way that the different mechanical parts work as a harmonious whole, in which perfect synchronization must be achieved. The real-time exchange of information in the distributed system that an industrial plant is nowadays plays an important role in achieving ever better performance, effectiveness and safety. The network connecting field devices such as sensors and actuators, field controllers such as PLCs, regulators and drive controllers, and man-machine interfaces is commonly called a fieldbus. Since motion transmission is now a task of the communication system, and no longer of kinematic chains as in the past, the communication protocol must ensure that the desired profiles, and their properties, are correctly transmitted to the axes and then reproduced; otherwise the synchronization among the different parts is lost, with all the resulting consequences. In this thesis, the problem of trajectory reconstruction in the case of an event-triggered communication system is addressed. The most important feature that a real-time communication system must have is the preservation of the following temporal and spatial properties: absolute temporal consistency, relative temporal consistency, and spatial consistency. Starting from the basic system composed of one master and one slave, and passing through systems made up of many slaves and one master, or many masters and one slave, the problems in profile reconstruction and in the preservation of temporal properties, and subsequently in the synchronization of different profiles in networks adopting an event-triggered communication system, are shown. These networks are characterized by the fact that a common knowledge of the global time is not available; they are therefore non-deterministic networks. Each topology is analyzed, and the phase-locked-loop solution proposed for the basic master-slave case is extended to cope with the other configurations.
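The core of a phase-locked-loop approach to master-slave synchronization is that the slave corrects its local estimate of the master's period from the (jittered) arrival times of sync events, without any global clock. The following is a minimal first-order (proportional-only) software PLL sketch; the thesis' actual loop design and gains are more elaborate, and the gain and timings below are illustrative:

```python
# Sketch: a first-order software PLL tracking a master's transmission
# period from event arrival times, with no shared global time.
# The gain alpha and the arrival times are illustrative assumptions.

def pll_track(arrival_times, initial_period, alpha=0.5):
    """Return the slave's period estimate after each observed sync event."""
    period = initial_period
    estimates = []
    for prev, t in zip(arrival_times, arrival_times[1:]):
        interval = t - prev        # observed master period (possibly jittered)
        error = interval - period  # discrepancy with the local estimate
        period += alpha * error    # proportional correction of the estimate
        estimates.append(period)
    return estimates

# A master sending every 10 ms; the slave starts with a 9 ms estimate:
est = pll_track([0, 10, 20, 30, 40, 50], initial_period=9.0)
```

Each correction halves the residual error here, so the estimate converges toward the true 10 ms period; a full PLL would add an integral term to track slow drifts of the master's clock.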
Abstract:
Bifidobacteria constitute up to 3% of the total microbiota and represent one of the most important health-promoting bacterial groups of the human intestinal microflora. The presence of Bifidobacterium in the human gastrointestinal tract has been directly related to several health-promoting activities; however, to date, no information about the specific mechanisms of interaction with the host is available. The first health-promoting activity studied in this work was oxalate degradation. Oxalic acid occurs extensively in nature and plays diverse roles, especially in pathological processes. Due to its highly oxidizing effects, hyperabsorption or abnormal synthesis of oxalate can cause serious acute disorders in mammals and be lethal in extreme cases. Intestinal oxalate-degrading bacteria could therefore be pivotal in maintaining oxalate homeostasis, reducing the risk of kidney stone development. In this study, the oxalate-degrading activity of 14 bifidobacterial strains was measured by a capillary electrophoresis technique. The oxc gene, encoding oxalyl-CoA decarboxylase, a key enzyme in oxalate catabolism, was isolated by probing a genomic library of B. animalis subsp. lactis BI07, which was one of the most active strains in the preliminary screening. The genetic and transcriptional organization of the oxc flanking regions was determined, unravelling the presence of two other independently transcribed open reading frames, potentially responsible for the ability of B. animalis subsp. lactis to degrade oxalate. Transcriptional analysis, using real-time quantitative reverse transcription PCR, revealed that these genes were highly induced in cells first adapted to subinhibitory concentrations of oxalate and then exposed to pH 4.5. Acidic conditions were also a prerequisite for a significant oxalate degradation rate, which dramatically increased in oxalate pre-adapted cells, as demonstrated in fermentation experiments with different pH-controlled batch cultures.
These findings provide new insights into the characterization of oxalate-degrading probiotic bacteria and may support the use of B. animalis subsp. lactis as a promising adjunct for the prophylaxis and management of oxalate-related kidney disease. In order to provide some insight into the molecular mechanisms involved in the interaction with the host, in the second part of this work we investigated whether Bifidobacterium was able to capture human plasminogen on the cell surface. The binding of human plasminogen to Bifidobacterium was dependent on lysine residues of surface protein receptors. Using a proteomic approach, we identified six putative plasminogen-binding proteins in the cell wall fraction of three strains of Bifidobacterium. The data suggest that plasminogen binding to Bifidobacterium is due to the concerted action of a number of proteins located on the bacterial cell surface, some of which are highly conserved cytoplasmic proteins that also have other essential cellular functions. Our findings represent a step forward in understanding the mechanisms involved in the Bifidobacterium-host interaction. Finally, we studied a new approach, based on MALDI-TOF MS, to measure the interaction between whole bacterial cells and host molecular targets. MALDI-TOF (Matrix-Assisted Laser Desorption Ionization-Time of Flight) mass spectrometry was applied, for the first time, to the investigation of the interaction between whole Bifidobacterium cells and host target proteins. In particular, by means of this technique, a dose-dependent human plasminogen-binding activity was shown for Bifidobacterium, and the involvement of lysine binding sites on the bacterial cell surface was proved. The results obtained were found to be consistent with those from well-established standard methodologies; thus the proposed MALDI-TOF approach has the potential to become a fast alternative method in the field of biorecognition studies involving bacterial cells and proteins of human origin.
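Transcriptional induction measured by real-time RT-PCR, as in the oxc analysis above, is commonly quantified with the comparative 2^(-ΔΔCt) method: the target gene's Ct is normalized to a reference gene in both conditions, and the difference of the two ΔCt values gives a fold change. The Ct values below are hypothetical, the reference gene is an assumption, and equal primer efficiencies are assumed:

```python
# Sketch: the comparative 2^(-deltadeltaCt) method for real-time RT-PCR,
# of the kind behind fold-induction estimates such as those for oxc.
# All Ct values are hypothetical; equal amplification efficiencies assumed.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Fold change of a target gene, normalized to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # deltaCt, treated
    d_ct_control = ct_target_control - ct_ref_control    # deltaCt, control
    return 2.0 ** -(d_ct_treated - d_ct_control)         # 2^(-deltadeltaCt)

# Hypothetical: oxc in oxalate-adapted cells at pH 4.5 vs. unadapted cells
fc = fold_change(20.0, 18.0, 25.0, 18.0)  # deltadeltaCt = -5 -> 32-fold
```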
Abstract:
Organic electronics has grown enormously during the last decades, driven by encouraging results and by the potential of these materials for innovative applications, such as flexible large-area displays, low-cost printable circuits, plastic solar cells and lab-on-a-chip devices. Moreover, their possible fields of application range from medicine, biotechnology, process control and environmental monitoring to defense and security requirements. However, a large number of questions regarding the mechanism of device operation remain unanswered. Among the most significant is charge carrier transport in organic semiconductors, which is not yet well understood. Another example is the correlation between morphology and electrical response. Even if it is recognized that the growth mode plays a crucial role in the performance of devices, it has not been exhaustively investigated. The main goal of this thesis was to find a correlation between growth modes, electrical properties and morphology in organic thin-film transistors (OTFTs). In order to study the thickness dependence of electrical performance in organic ultra-thin-film transistors, we designed and developed a home-built experimental setup for performing real-time electrical monitoring and post-growth in situ electrical characterization. We grew pentacene TFTs under high vacuum conditions, systematically varying the deposition rate at a fixed (room) temperature. The drain-source current IDS and the gate-source current IGS were monitored in real time, while a complete post-growth in situ electrical characterization was carried out. Finally, an ex situ morphological investigation was performed using the atomic force microscope (AFM). In this work, we present the correlation for pentacene TFTs between growth conditions, Debye length and morphology (through the correlation length parameter).
We have demonstrated that there is a layered charge carrier distribution, which is strongly dependent on the growth mode (i.e. deposition rate at a fixed temperature), leading to a variation of the conduction channel from 2 to 7 monolayers (MLs). We reconcile earlier reported results that were apparently contradictory. Our results made evident the necessity of reconsidering the concept of Debye length in a layered low-dimensional device. Additionally, we introduce for the first time a breakthrough technique: it makes evident the percolation of the first MLs of pentacene TFTs by monitoring IGS in real time, correlating morphological phenomena with the device's electrical response. The present thesis is organized in the following five chapters. Chapter 1 gives an introduction to organic electronics, illustrating the operation principle of TFTs. Chapter 2 presents organic growth from theoretical and experimental points of view; the second part of this chapter presents the electrical characterization of OTFTs, and the typical performance of pentacene devices is shown. In addition, we introduce a correction technique for the reconstruction of measurements hampered by leakage current. In chapter 3, we describe in detail the design and operation of our innovative home-built experimental setup for performing real-time and in situ electrical measurements. Some preliminary results and the breakthrough technique for correlating morphological and electrical changes are presented. Chapter 4 collects the most important results obtained in real-time and in situ conditions, which correlate growth conditions, electrical properties and morphology of pentacene TFTs. In chapter 5 we describe applicative experiments where the electrical performance of pentacene TFTs was investigated in ambient conditions, in contact with water or aqueous solutions and, finally, in the detection of DNA concentration as a label-free sensor, within the biosensing framework.
Abstract:
Hydrologic risk (and the closely related hydro-geologic risk) is, and has always been, a very relevant issue, due to the severe consequences that flooding, or water in general, may cause in terms of human and economic losses. Floods are natural phenomena, often catastrophic, and cannot be avoided, but their damages can be reduced if they are predicted sufficiently in advance. For this reason, flood forecasting plays an essential role in hydro-geological and hydrological risk prevention. Thanks to the development of sophisticated meteorological, hydrologic and hydraulic models, flood forecasting has made significant progress in recent decades; nonetheless, models are imperfect, which means that we are still left with a residual uncertainty about what will actually happen. In this thesis, this type of uncertainty is discussed and analyzed. In operational problems, it is possible to affirm that the ultimate aim of forecasting systems is not to reproduce the river's behavior: this is only a means for reducing the uncertainty associated with what will happen as a consequence of a precipitation event. In other words, the main objective is to assess whether or not preventive interventions should be adopted and which operational strategy may represent the best option. The main problem for a decision maker is to interpret model results and translate them into an effective intervention strategy. To make this possible, it is necessary to clearly define what is meant by uncertainty, since in the literature confusion is often made on this issue. Therefore, the first objective of this thesis is to clarify this concept, starting with a key question: should the choice of the intervention strategy be based on the evaluation of the model prediction, i.e. on its ability to represent reality, or on the evaluation of what will actually happen on the basis of the information given by the model forecast?
Once the previous idea is made unambiguous, the other main concern of this work is to develop a tool that can provide effective decision support, making objective and realistic risk evaluations possible. In particular, such a tool should be able to provide an uncertainty assessment that is as accurate as possible. This means primarily three things: it must be able to correctly combine all the available deterministic forecasts, it must assess the probability distribution of the predicted quantity, and it must quantify the flooding probability. Furthermore, given that the time to implement prevention strategies is often limited, the flooding probability has to be linked to the time of occurrence. For this reason, it is necessary to quantify the flooding probability within a time horizon related to that required to implement the intervention strategy, and it is also necessary to assess the probability of the flooding time.
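One common way to turn a probabilistic forecast into the horizon-limited flooding probability described above is to count, over an ensemble of forecast water-level trajectories, the fraction of members that exceed a critical threshold within the decision horizon. The sketch below illustrates this under assumed data; the ensemble values, threshold and horizon are invented, and the thesis' actual probabilistic machinery is more sophisticated:

```python
# Sketch: probability of flooding within a time horizon, estimated as the
# fraction of ensemble forecast trajectories exceeding a critical level.
# Ensemble members, threshold and horizon below are hypothetical.

def flood_probability(ensemble, threshold, horizon_steps):
    """Fraction of members whose level exceeds `threshold` at least
    once within the first `horizon_steps` forecast steps."""
    n_exceed = sum(
        1 for member in ensemble
        if any(level > threshold for level in member[:horizon_steps])
    )
    return n_exceed / len(ensemble)

ensemble = [            # hourly water levels (m) for 4 hypothetical members
    [2.1, 2.8, 3.4, 3.9],   # exceeds 3.5 m at the 4th step
    [2.0, 2.4, 2.9, 3.1],   # never exceeds
    [2.3, 3.0, 3.6, 4.2],   # exceeds at the 3rd step
    [1.9, 2.2, 2.5, 2.7],   # never exceeds
]
p = flood_probability(ensemble, threshold=3.5, horizon_steps=4)  # 2/4 = 0.5
```

The first step at which each member exceeds the threshold similarly yields an empirical distribution of the flooding time.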
Development of a miniaturized system for real-time attitude control of nano- and microsatellites
Abstract:
Microsatellites and nanosatellites, such as CubeSats, lack integrated attitude control and orbital manoeuvring systems. The aim of this thesis was to build a system compatible with a single-unit CubeSat, complete with magnetic and mechanical actuators and including all the sensors and electronics required for its operation, creating a device totally independent of the vehicle on which it is installed, capable of operating both autonomously and on commands received from the ground. The thesis describes the numerical simulation campaigns carried out to validate the technological choices made, the development phases of the electronics and mechanics, the tests on the prototypes built, and the operation of the final system. Such extreme integration of components can imply interference between one device and another, as in the case of the magnetorquers and the magnetometers. The effects of their interaction were therefore studied and evaluated, verifying their magnitude and the validity of the design. Since the components used are all low-cost and of terrestrial derivation, a brief theoretical introduction to the effects of the space environment on electronics is given, followed by the description of a fault-tolerant system based on new design principles. This system was built and tested, thereby verifying the possibility of realizing a reliable controller, resistant to the space environment, for the attitude control system. Finally, some possible advanced versions of the system were analyzed, outlining their main design aspects, such as the integration of GPS and the implementation of attitude determination functions exploiting the sensors on board.
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Companies that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry.
The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework, which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migrations, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH).
The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very good average-case and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
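The key property of an addressable heap for timer management is that any pending timer can be located and cancelled in O(log n) via a handle, while the earliest deadline remains at the root. The sketch below illustrates that property with a simplified array-plus-index-map variant; the thesis' ABH is pointer-based, so this is only an illustration of the interface, not its implementation:

```python
# Sketch: a simplified addressable min-heap for timer management.
# Handles let a timer be cancelled in O(log n); the earliest deadline
# stays at the root. (The thesis' ABH is pointer-based; this array +
# index-map variant only illustrates the "addressable" property.)

class AddressableHeap:
    def __init__(self):
        self._a = []    # heap array of (deadline, handle) pairs
        self._pos = {}  # handle -> current index in the array

    def insert(self, handle, deadline):
        self._a.append((deadline, handle))
        self._pos[handle] = len(self._a) - 1
        self._sift_up(len(self._a) - 1)

    def peek_min(self):
        """Handle of the timer with the earliest deadline."""
        return self._a[0][1]

    def remove(self, handle):
        """Cancel an arbitrary pending timer via its handle."""
        i = self._pos.pop(handle)
        last = self._a.pop()
        if i < len(self._a):          # not the element we just popped
            self._a[i] = last
            self._pos[last[1]] = i
            self._sift_down(i)
            self._sift_up(i)

    def _swap(self, i, j):
        self._a[i], self._a[j] = self._a[j], self._a[i]
        self._pos[self._a[i][1]] = i
        self._pos[self._a[j][1]] = j

    def _sift_up(self, i):
        while i and self._a[i] < self._a[(i - 1) // 2]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._a)
        while True:
            c = 2 * i + 1
            if c >= n:
                return
            if c + 1 < n and self._a[c + 1] < self._a[c]:
                c += 1                # pick the smaller child
            if self._a[c] >= self._a[i]:
                return
            self._swap(i, c)
            i = c

# Three pending timers; the one due at t=10 is cancelled via its handle:
h = AddressableHeap()
for handle, deadline in [("t1", 30), ("t2", 10), ("t3", 20)]:
    h.insert(handle, deadline)
h.remove("t2")
```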
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration, and are not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Due to the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast, in real time, the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit the model parameters. Model validation for the eastern United States shows considerable improvement of our fully inferential approach compared with the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain the spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modeling, using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
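The 8-hour average being forecast has a concrete definition in the abstract: the mean of the previous four hourly values, the current hour, and the predictions for the next three hours. A minimal sketch of that window (the ozone values are invented, in ppb):

```python
# Sketch: the current 8-hour average ozone level as defined in the
# abstract: previous 4 observed hours + current hour + next 3 predicted
# hours, averaged. The hourly values below are hypothetical.

def eight_hour_average(past_four, current, next_three):
    """Mean of a 4-observation / 1-current / 3-prediction window."""
    if len(past_four) != 4 or len(next_three) != 3:
        raise ValueError("need 4 past values and 3 predictions")
    window = list(past_four) + [current] + list(next_three)
    return sum(window) / 8.0

avg = eight_hour_average([52, 55, 58, 60], 63, [65, 64, 61])  # ppb
```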
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of the aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. Limitations of the standard approaches for large events emerge in this chapter; the difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested: the first is a threshold-based method that uses traditional seismic data; the second is an innovative approach using continuous GPS data. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material, and schedule costs. Factoring functional, reusable logic into the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior highly dependent on execution history, which wrecks time composability and, with it, incrementality. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or on the software deployed to it. We then focus on the role played by the real-time operating system. Initially we consider single-core processors and, becoming progressively less permissive about the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To demonstrate what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work we added support for limited preemption to ORK+, a first in the landscape of real-world kernels: our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs.
To corroborate our results we present findings from real-world case studies in the avionics industry.
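The key idea of limited-preemptive scheduling mentioned above can be illustrated with a toy discrete-time simulation: a newly released higher-priority job may not preempt until the running job reaches the end of its current non-preemptive region. This is a minimal sketch under simplified assumptions (unit-time execution, one non-preemptive region length `q` for all tasks), not the ORK+ mechanism itself:

```python
def schedule(tasks, horizon, q):
    """Toy fixed-priority schedule with limited preemption: the running
    job executes in non-preemptive chunks of length q, and a switch to a
    higher-priority ready job happens only at chunk boundaries (or when
    the running job finishes).  Hypothetical simplified model.

    tasks: list of (priority, release_time, wcet); lower number = higher priority
    Returns the task index executing at each time unit (None = idle).
    """
    remaining = [c for (_, _, c) in tasks]
    timeline = []
    running, np_left = None, 0
    for t in range(horizon):
        ready = [i for i, (_, r, _) in enumerate(tasks)
                 if r <= t and remaining[i] > 0]
        if not ready:
            running = None
            timeline.append(None)
            continue
        top = min(ready, key=lambda i: tasks[i][0])
        # Dispatch only when idle, finished, or at a preemption point.
        if running is None or remaining[running] == 0 or np_left <= 0:
            running, np_left = top, q
        remaining[running] -= 1
        np_left -= 1
        timeline.append(running)
    return timeline

# Low-priority task 1 starts at t=0; high-priority task 0 is released at
# t=1 but must wait for the end of task 1's chunk before preempting.
print(schedule([(0, 1, 2), (1, 0, 4)], 6, 2))  # [1, 1, 0, 0, 1, 1]
```

With `q = 1` the same scenario degenerates to fully preemptive scheduling, and task 0 runs immediately at its release: `[1, 0, 0, 1, 1, 1]`. The point of limiting preemption, as in the thesis, is that the bounded non-preemptive regions cut preemption overheads and make resource sharing tractable.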
Abstract:
In recent decades, Organic Thin Film Transistors (OTFTs) have attracted considerable interest due to their low cost, large area, and flexibility, which have led them to be considered the building blocks of future organic electronics. Experimentally, devices based on the same organic material deposited in different ways, i.e. by varying the deposition rate of the molecules, show different electrical performance. As predicted theoretically, this is due to the rate at which charge carriers can be transported by hopping in organic thin films, a transport process that depends on the molecular arrangement. This strongly suggests a correlation between the morphology of the organic semiconductor and the performance of the OTFT, and it motivated us to carry out an in-situ, real-time SPM study of organic semiconductor growth, an almost unprecedented experiment, with the aim of fully describing the morphological evolution of the ultra-thin film and identifying the relevant morphological parameters affecting the OTFT electrical response. For the case of 6T on silicon oxide, we have shown that the growth mechanism is 2D+3D, with a roughening transition at the third layer followed by rapid roughening. The relevant morphological parameters have been extracted from the AFM images. We also developed an original mathematical model to estimate, theoretically and more accurately than before, the capacitance of an EFM tip in front of a metallic substrate. Finally, we obtained Ultra High Vacuum (UHV) AFM images of the 6T lying-molecules layer both on silicon oxide and on top of 6T islands. Moreover, we performed ex-situ AFM imaging on a bilayer film composed of pentacene (a p-type semiconductor) and C60 (an n-type semiconductor).