12 results for Elements, High Throughput Data, electrophysiology, data processing, Real-Time analysis
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Environmental computer models are deterministic models devoted to predicting various environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, with unknown calibration and no information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast, in real time, the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and the predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy to fit model parameters. Model validation for the eastern United States shows a considerable improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain spatially varying uncertainty associated with numerical model output. We show how we can learn about such uncertainty through suitable stochastic data fusion modeling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
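As a minimal illustrative sketch of the 8-hour average defined above (not the downscaler itself; the function name and values are hypothetical), the quantity being forecast can be computed as:

def current_8h_average(observed_last5, predicted_next3):
    # observed_last5: hourly ozone values for hours t-4 ... t (observations);
    # predicted_next3: model forecasts for hours t+1, t+2, t+3.
    assert len(observed_last5) == 5 and len(predicted_next3) == 3
    return (sum(observed_last5) + sum(predicted_next3)) / 8.0

# Example with hypothetical hourly ozone concentrations (ppb):
print(current_8h_average([52, 55, 58, 61, 63], [64, 62, 59]))  # 59.25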
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitter, as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove the cache jitter stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards - as opposed to becoming so only when the system is final - and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
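As a hedged, toy illustration of how memory layout gives rise to cache jitter (this is not the layout optimisation method developed in the thesis; the cache parameters and addresses below are assumptions), consider a direct-mapped instruction cache in which two procedures conflict whenever their code maps to overlapping cache sets:

# Hypothetical direct-mapped instruction cache parameters.
LINE_SIZE = 32    # bytes per cache line
NUM_SETS = 256    # number of cache sets

def cache_sets(start_addr, size):
    # Cache sets touched by the code region [start_addr, start_addr + size).
    first_line = start_addr // LINE_SIZE
    last_line = (start_addr + size - 1) // LINE_SIZE
    return {line % NUM_SETS for line in range(first_line, last_line + 1)}

def conflict(region_a, region_b):
    return bool(cache_sets(*region_a) & cache_sets(*region_b))

# Two hypothetical 1 KiB procedures placed 8 KiB apart map to the same sets:
print(conflict((0x1000, 1024), (0x3000, 1024)))  # True

Relocating one of the two regions so that their set ranges no longer overlap removes this layout-dependent source of jitter, which is the intuition behind a layout optimisation pass.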
Abstract:
This thesis is a collection of works focused on the topic of Earthquake Early Warning, with special attention to large-magnitude events. The topic is addressed from different points of view, and the structure of the thesis reflects the variety of aspects that have been analyzed. The first part is dedicated to the giant 2011 Tohoku-Oki earthquake. The main features of the rupture process are first discussed. The earthquake is then used as a case study to test the feasibility of Early Warning methodologies for very large events. The limitations of the standard approaches for large events emerge in this chapter. The difficulties are related to the real-time magnitude estimate from the first few seconds of recorded signal. An evolutionary strategy for the real-time magnitude estimate is proposed and applied to the Tohoku-Oki earthquake. In the second part of the thesis a larger number of earthquakes is analyzed, including small, moderate and large events. Starting from the measurement of two Early Warning parameters, the behavior of small and large earthquakes in the initial portion of the recorded signals is investigated. The aim is to understand whether small and large earthquakes can be distinguished from the initial stage of their rupture process. A physical model and a plausible interpretation to justify the observations are proposed. The third part of the thesis is focused on practical, real-time approaches for the rapid identification of the potentially damaged zone during a seismic event. Two different approaches for the rapid prediction of the damage area are proposed and tested. The first is a threshold-based method which uses traditional seismic data; an innovative approach using continuous GPS data is then explored. Both strategies improve the prediction of the large-scale effects of strong earthquakes.
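Purely for orientation (this is not the specific regression or the evolutionary strategy proposed in the thesis; coefficients and inputs are placeholders), real-time magnitude estimates of this kind are often based on the peak ground displacement measured over a P-wave window that grows as new samples arrive:

import math

def magnitude_from_pd(pd_peak_cm, hypo_dist_km, a=1.0, b=1.0, c=5.0):
    # Generic Pd-based regression of the form M = a*log10(Pd) + b*log10(R) + c.
    # The coefficients a, b, c are placeholders, not calibrated values.
    return a * math.log10(pd_peak_cm) + b * math.log10(hypo_dist_km) + c

def evolutionary_magnitude(displacement_cm, hypo_dist_km, sample_rate_hz=100):
    # Re-estimate the magnitude each second as the P-wave window expands.
    estimates = []
    for seconds in range(1, len(displacement_cm) // sample_rate_hz + 1):
        window = displacement_cm[: seconds * sample_rate_hz]
        pd_peak = max(abs(x) for x in window)
        estimates.append((seconds, magnitude_from_pd(pd_peak, hypo_dist_km)))
    return estimates

The limitation mentioned above shows up here as well: if the window is restricted to the first few seconds, pd_peak stops growing long before the rupture of a very large event is over, so the estimate levels off.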
Abstract:
The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior highly dependent on execution history, which wrecks time composability, and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions on the system architecture or the software deployment to it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be achieved in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute first in the landscape of real-world kernels. Our implementation allows resource sharing to coexist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
Abstract:
Autism is a neurodevelopmental disorder characterized by impaired verbal communication, limited reciprocal social interaction, restricted interests and repetitive behaviours. Twin and family studies indicate a large genetic contribution to ASDs (Autism Spectrum Disorders). During my Ph.D. I was involved in several projects in which I used different genetic approaches in order to identify susceptibility genes for autism on chromosomes 2, 7 and X: 1) High-density SNP association and CNV analysis of two autism susceptibility loci. The International Molecular Genetic Study of Autism Consortium (IMGSAC) previously identified linkage loci on chromosomes 7 and 2, termed AUTS1 and AUTS5, respectively. In this study, we evaluated the patterns of linkage disequilibrium (LD) and the distribution of haplotype blocks, utilising data from the HapMap project, across the two strongest peaks of linkage on chromosomes 2 and 7. More than 3000 SNPs were selected in each locus in all known genes, as well as SNPs in non-genic highly conserved sequences. All markers were genotyped to perform a high-density association analysis and to explore copy number variation within these regions. The study sample consisted of 127 and 126 multiplex families, showing linkage to the AUTS1 and AUTS5 regions, respectively, and 188 gender-matched controls. Association and CNV analysis implicated several new genes, including IMMP2L and DOCK4 on chromosome 7 and ZNF533 and NOSTRIN on chromosome 2. In particular, my contribution to this project focused on the characterization of the best candidate gene in each locus. On the AUTS5 locus I carried out a transcript study of ZNF533 in different human tissues to verify which isoforms and start exons were expressed; high transcript variability and a new exon, never described before, were identified in this analysis. Furthermore, I selected 31 probands carrying the risk haplotype and performed a mutation screen of all known exons in order to identify novel coding variants associated with autism. On the AUTS1 locus a duplication was detected in one multiplex family, transmitted from the father to an affected son. This duplication interrupts two genes, IMMP2L and DOCK4, and warranted further analysis. Thus, I performed a screening of the IMGSAC collection cohort (285 multiplex families) using a QMPSF assay (Quantitative Multiplex PCR of Short Fluorescent Fragments) to test whether CNVs in this genic region segregate with the autism phenotype and to compare their frequency with a sample of 475 UK controls. Evidence for a role of DOCK4 in autism susceptibility was supported by independent replication of the association at rs2217262 and by the finding of a deletion segregating in a sib-pair family. 2) Analysis of X chromosome inactivation. Skewed X chromosome inactivation (XCI) is observed in females carrying gene mutations involved in several X-linked syndromes. We aimed to estimate the role of X-linked genes in ASD susceptibility by ascertaining the XCI pattern in a sample of 543 informative mothers of children with ASD and in a sample of 164 affected girls. The study sample included families from different European consortia. I analysed the XCI pattern in a sample of Italian mothers from singleton families with ASD and in control groups (144 adult females and 40 young females). We observed no significant excess of skewed XCI in families with ASD.
Interestingly, two mothers and one girl carrying known mutations in X-linked genes (NLGN3, ATRX, MECP2) showed highly skewed XCI, suggesting that ascertainment of XCI could reveal families with X-linked mutations. Linkage analysis was carried out in the subgroup of multiplex families with skewed XCI (≥80:20), and a modest increase in allele sharing was obtained in the Xq27-Xq28 region, with a peak Z score of 1.75 close to rs719489. In this region FMR1 and MECP2 have been associated in some cases with autism and therefore represent candidates for the disorder. I performed a mutation screen of MECP2 in 33 unrelated probands from IMGSAC and Italian families showing XCI skewness. Recently, Xq28 duplications including MECP2 have been identified in families with MR, with asymptomatic carrier females showing extreme (>85%) skewing of XCI. For this reason I used the sample of probands from X-skewed families to perform CNV analysis by Real-Time quantitative PCR. No duplications were found in our sample. I also confirmed all data using the MLPA assay (Multiplex Ligation-dependent Probe Amplification) as an alternative method. 3) ASMT as a functional candidate gene for autism. Recently, a possible involvement of the acetylserotonin O-methyltransferase (ASMT) gene in susceptibility to ASDs has been reported: mutation screening of the ASMT gene in 250 individuals from the PARIS collection revealed several rare variants with a likely functional role; moreover, significant association was reported for two SNPs (rs4446909 and rs5989681) located in one of the two alternative promoters of the gene. To further investigate these findings, I carried out a replication study using a sample of 263 affected individuals from the IMGSAC collection and 390 control individuals. Several rare mutations were identified, including the splice site mutation IVS5+2T>C and the L326F substitution previously reported by Melke et al. (2007), but the same rare variants were also found in control individuals in our study. Interestingly, a new R319X stop mutation was found in a single autism proband of Italian origin and was absent from the entire control sample. Furthermore, no association was replicated in our case-control study typing the SNPs in the ASMT promoter B.
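As a hedged illustration of the skewing metric used above (the ≥80:20 threshold), the following sketch computes an XCI skewing percentage from the relative intensities of the two alleles in a methylation-sensitive assay; the function and the input values are hypothetical and do not reproduce the assay protocol used in the study:

def xci_skewing(allele1_area, allele2_area):
    # Percentage of cells inactivating the X carrying the predominant allele.
    total = allele1_area + allele2_area
    return 100.0 * max(allele1_area, allele2_area) / total

skew = xci_skewing(allele1_area=830.0, allele2_area=170.0)
print(skew, skew >= 80.0)  # 83.0 True -> counted as skewed (>= 80:20)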
Abstract:
Ultrasound imaging is widely used in medical diagnostics as it is the fastest, least invasive, and least expensive imaging modality. However, ultrasound images are intrinsically difficult to interpret. In this scenario, Computer Aided Detection (CAD) systems can be used to support physicians during diagnosis by providing a second opinion. This thesis discusses efficient ultrasound processing techniques for computer-aided medical diagnostics, focusing on two major topics: (i) Ultrasound Tissue Characterization (UTC), aimed at characterizing and differentiating between healthy and diseased tissue; (ii) Ultrasound Image Segmentation (UIS), aimed at detecting the boundaries of anatomical structures to automatically measure organ dimensions and compute clinically relevant functional indices. Research on UTC produced a CAD tool for Prostate Cancer detection to improve the biopsy protocol. In particular, this thesis contributes with: (i) the development of a robust classification system; (ii) the exploitation of parallel computing on GPU for real-time performance; (iii) the introduction of both an innovative Semi-Supervised Learning algorithm and a novel supervised/semi-supervised learning scheme for CAD system training that improve system performance while reducing data collection effort and avoiding the waste of collected data. The tool provides physicians with a risk map highlighting suspicious tissue areas, allowing them to perform a lesion-directed biopsy. Clinical validation demonstrated the system's validity as a diagnostic support tool and its effectiveness at reducing the number of biopsy cores required for an accurate diagnosis. For UIS, the research developed a heart disease diagnostic tool based on Real-Time 3D Echocardiography. The thesis contributions to this application are: (i) the development of an automated GPU-based level-set segmentation framework for 3D images; (ii) the application of this framework to myocardium segmentation. Experimental results showed the high efficiency and flexibility of the proposed framework. Its effectiveness as a tool for quantitative analysis of 3D cardiac morphology and function was demonstrated through clinical validation.
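For orientation only, the sketch below performs one explicit update step of a basic two-dimensional level-set evolution, in which the zero level set moves along its normal with speed F (equation phi_t + F*|grad phi| = 0). It is a CPU/NumPy toy with an assumed constant speed term, not the GPU-based framework developed in the thesis:

import numpy as np

def level_set_step(phi, speed, dt=0.1):
    # One explicit update of phi_t + speed * |grad phi| = 0.
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2)
    return phi - dt * speed * grad_mag

# Toy example: a circle of radius 10 (zero level set of phi) expanding outward.
y, x = np.mgrid[0:64, 0:64]
phi = np.sqrt((x - 32.0)**2 + (y - 32.0)**2) - 10.0
for _ in range(20):
    phi = level_set_step(phi, speed=1.0)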
Abstract:
A careful analysis of the scientific literature concerning the environmental contaminants under study (polychlorinated biphenyls, PCBs) made it possible to collect useful data on the properties of these molecules, their diffusion and their hazardousness. The focus of the research was the in vitro study of the cytotoxic and transforming potential of PCBs, using as references a commercial PCB mixture, Aroclor 1260, and a MIX of 18 congeners reconstituted in the laboratory. The work then proceeded with the evaluation of the effects of these mixtures and of two single congeners (PCB 118 and PCB 153) on different cell lines in short-term viability tests. The use of specific assays then allowed the assessment of a possible estrogenic potential. Once a general picture of the possible effects of the mixtures had been obtained from the results of the functional assays, the modulation by these molecules and/or their mixtures of the expression of genes involved in the response to estrogens or to dioxin-like compounds was evaluated, by carrying out a molecular analysis with Real-Time PCR (RT-PCR) and specifically analysing markers of the Aryl Hydrocarbon Receptor (AhR) and Estrogen Receptor (ER) pathways. Finally, in order to verify the applicability of expression biomarkers to real contamination scenarios, the focus shifted to samples extracted from environmental matrices; in particular, cell lines of interest were exposed to sediment extracts from polluted sites. A molecular approach was chosen, with the aim of identifying pathways to be evaluated at a later stage in specific functional assays. The research activity relied on the DNA-microarray technique to evaluate the modulation of gene expression in response to exposure to environmental contaminants. In this way it is possible to define the gene expression profiles underlying complex biological responses, with the aim of identifying biomarkers able to predict the risk for humans and of allowing the estimation of a direct relationship between exposure and possible effects.
Abstract:
This work presents exact, hybrid algorithms for mixed resource Allocation and Scheduling problems; in general terms, these consist in assigning finite-capacity resources over time to a set of precedence-connected activities. The proposed methods have broad applicability, but are mainly motivated by applications in the field of Embedded System Design. In particular, high-performance embedded computing has recently witnessed the shift from single-CPU platforms with application-specific accelerators to programmable Multi-Processor Systems-on-Chip (MPSoCs). These allow higher flexibility, real-time performance and low energy consumption, but the programmer must be able to effectively exploit the platform parallelism. This raises interest in the development of algorithmic techniques to be embedded in CAD tools; in particular, given a specific application and platform, the objective is to perform an optimal allocation of hardware resources and to compute an execution schedule. In this regard, since embedded systems tend to run the same set of applications for their entire lifetime, off-line, exact optimization approaches are particularly appealing. Quite surprisingly, the use of exact algorithms has not been well investigated so far; this is in part motivated by the complexity of integrated allocation and scheduling, which sets tough challenges for "pure" combinatorial methods. The use of hybrid CP/OR approaches presents the opportunity to exploit the mutual advantages of different methods, while compensating for their weaknesses. In this work, we first consider an Allocation and Scheduling problem on the Cell BE processor by Sony, IBM and Toshiba; we propose three different solution methods, leveraging decomposition, cut generation and heuristic-guided search. Next, we face Allocation and Scheduling of so-called Conditional Task Graphs, explicitly accounting for branches whose outcome is not known at design time; we extend the CP scheduling framework to effectively deal with the introduced stochastic elements. Finally, we address Allocation and Scheduling with uncertain, bounded execution times via conflict-based tree search; we introduce a simple and flexible time model to take duration variability into account and provide an efficient conflict detection method. The proposed approaches achieve good results on practical-size problems, thus demonstrating that the use of exact approaches for system design is feasible. Furthermore, the developed techniques bring significant contributions to combinatorial optimization methods.
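As a hedged, minimal example of what an exact allocation-and-scheduling model can look like (this sketch uses the Google OR-Tools CP-SAT solver rather than the hybrid CP/OR methods developed in the thesis, and the task data are invented), consider four tasks with one precedence constraint to be mapped onto two processors while minimising the makespan:

from ortools.sat.python import cp_model

durations = [3, 2, 4, 2]        # hypothetical task durations
precedences = [(0, 2)]          # task 0 must finish before task 2 starts
num_procs = 2
horizon = sum(durations)

model = cp_model.CpModel()
starts, ends = [], []
proc_intervals = [[] for _ in range(num_procs)]

for t, d in enumerate(durations):
    start = model.NewIntVar(0, horizon, f"start{t}")
    end = model.NewIntVar(0, horizon, f"end{t}")
    starts.append(start)
    ends.append(end)
    choices = []
    for p in range(num_procs):
        use = model.NewBoolVar(f"task{t}_on_proc{p}")
        interval = model.NewOptionalIntervalVar(start, d, end, use, f"iv{t}_{p}")
        proc_intervals[p].append(interval)
        choices.append(use)
    model.Add(sum(choices) == 1)           # allocation: exactly one processor per task

for p in range(num_procs):
    model.AddNoOverlap(proc_intervals[p])  # unary capacity of each processor

for before, after in precedences:
    model.Add(starts[after] >= ends[before])

makespan = model.NewIntVar(0, horizon, "makespan")
model.AddMaxEquality(makespan, ends)
model.Minimize(makespan)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print("makespan =", solver.Value(makespan))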
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical time constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems by exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been further extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges; one of these was timekeeping. In this regard, a further contribution is a novel data structure, called addressable binary heap (ABH). The ABH, conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
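As a hedged sketch of the idea behind an addressable heap of timers (this is not the ABH implementation described above; for brevity the sketch is array-backed with per-node index handles, whereas the ABH is pointer-based), each armed timer gets a handle through which it can later be cancelled in O(log n):

class TimerNode:
    # Handle returned to clients; tracks the timer's current heap position.
    def __init__(self, expiry, payload=None):
        self.expiry = expiry
        self.payload = payload
        self.index = -1

class TimerHeap:
    # Min-heap keyed by expiry; nodes stay addressable through their handle.
    def __init__(self):
        self._heap = []

    def insert(self, node):
        node.index = len(self._heap)
        self._heap.append(node)
        self._sift_up(node.index)
        return node

    def pop_earliest(self):
        earliest = self._heap[0]
        self._remove_at(0)
        return earliest

    def cancel(self, node):
        self._remove_at(node.index)

    def _remove_at(self, i):
        last = self._heap.pop()
        if i < len(self._heap):
            self._heap[i] = last
            last.index = i
            self._sift_down(i)
            self._sift_up(i)

    def _swap(self, i, j):
        self._heap[i], self._heap[j] = self._heap[j], self._heap[i]
        self._heap[i].index, self._heap[j].index = i, j

    def _sift_up(self, i):
        while i > 0:
            parent = (i - 1) // 2
            if self._heap[i].expiry >= self._heap[parent].expiry:
                break
            self._swap(i, parent)
            i = parent

    def _sift_down(self, i):
        n = len(self._heap)
        while True:
            left, right, smallest = 2 * i + 1, 2 * i + 2, i
            if left < n and self._heap[left].expiry < self._heap[smallest].expiry:
                smallest = left
            if right < n and self._heap[right].expiry < self._heap[smallest].expiry:
                smallest = right
            if smallest == i:
                return
            self._swap(i, smallest)
            i = smallest

heap = TimerHeap()
t1 = heap.insert(TimerNode(expiry=150, payload="watchdog"))
t2 = heap.insert(TimerNode(expiry=100, payload="release job"))
heap.cancel(t1)                      # cancel an armed timer through its handle
print(heap.pop_earliest().payload)   # -> "release job"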
Abstract:
Modern internal combustion engines are becoming increasingly complex. The introduction of the EURO VI emission regulation will require a significant reduction of exhaust pollutants. The most critical issue is the reduction of NOx for Diesel engines, to be added to the reductions already in force under the previous regulations. Typically, the calibration of a new engine involves a series of specific tests on the test bench. The ever-increasing number of combustion control parameters, arising from the greater mechanical complexity of the engine itself, causes an exponential growth in the number of tests required to characterize the whole system. The goal of this doctoral project is to build a real-time combustion analysis system implementing several algorithms not yet available in modern engine control units, paying particular attention to the choice of the hardware on which the analysis algorithms are implemented. The aim is to create a Rapid Control Prototyping (RCP) platform that exploits most of the sensors already present in a production car, and that is able to shorten the time and cost of powertrain experimentation by reducing the need for a posteriori analysis of previously acquired data in favour of a larger amount of computation performed in real time. The proposed solution guarantees upgradability, i.e. the possibility of keeping the computing platform at the highest technological level, postponing its obsolescence and replacement costs. This property translates into the need to maintain compatibility between hardware and software of different generations, making it possible to replace the components that limit performance without redesigning the software.
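As a hedged example of the kind of cycle-resolved metric such a real-time platform could compute (the abstract does not detail which algorithms are implemented, so this is only an assumed illustration), the indicated mean effective pressure, IMEP = (1/Vd) * closed-cycle integral of p dV, can be estimated from sampled in-cylinder pressure and volume:

import numpy as np

def imep(pressure_pa, volume_m3, displacement_m3):
    # pressure_pa, volume_m3: crank-angle-resolved samples covering one full
    # engine cycle, with the first sample repeated at the end so that the
    # trapezoidal rule approximates the closed-cycle integral of p dV.
    work_j = np.trapz(pressure_pa, volume_m3)
    return work_j / displacement_m3   # result in Pa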
Abstract:
Gastroesophageal reflux disease (GERD) is divided into two categories: non-erosive (NERD) and erosive (ERD) disease. These two GERD phenotypes show different pathophysiological and clinical characteristics. NERD is the most common form. Although ERD and NERD are difficult to distinguish at the clinical level, the NERD form has unique physiological, pathophysiological, anatomical and histological features. Cell replication of the basal layer is thought to be one of the factors involved in mucosal resistance and in the structural defence of the epithelium. Several studies have shown that cell proliferation is reduced in esophageal mucosa exposed to chronic acid and peptic insults in GERD patients; moreover, a recent study showed that the cannabinoid receptor CB1 is involved in wound repair in the colonic mucosa. On the basis of these data we assessed the presence of the CB1 receptor in esophageal mucosa biopsies from ERD and NERD patients and from healthy controls by Western blot, immunohistochemistry and Real-Time PCR, demonstrating for the first time the presence of this receptor in the esophageal epithelium and a reduction of its expression levels in ERD patients compared with NERD patients and healthy controls. Subsequently, to better clarify the molecular mechanisms that characterize ERD and NERD, we performed a shotgun proteomic analysis, which highlighted a pattern of 33 proteins differentially expressed in NERD vs ERD patients, seven of which were confirmed by Western blot and four by immunohistochemistry. In conclusion, our results confirmed that ERD and NERD are two distinct entities at the protein level and proposed candidate biomarkers for the differential diagnosis of ERD and NERD.
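For illustration only, relative expression from Real-Time PCR data is often quantified with the 2^-ddCt (Livak) method; the sketch below computes a fold change under that assumption, which is not stated in the abstract, and all Ct values are invented:

def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # Target gene normalised to a reference gene, sample relative to control.
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Hypothetical Ct values: CB1 in an ERD biopsy vs a healthy control,
# both normalised to a housekeeping gene.
print(fold_change_ddct(28.5, 18.0, 26.0, 18.2))  # ~0.15 -> reduced expression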
Abstract:
This thesis investigates interactive scene reconstruction and understanding using RGB-D data only. Indeed, we believe that in the near future depth cameras will remain a cheap and low-power 3D sensing alternative, suitable for mobile devices as well. Therefore, our contributions build on top of state-of-the-art approaches to achieve advances in three main challenging scenarios, namely mobile mapping, large-scale surface reconstruction and semantic modeling. First, we will describe an effective approach dealing with Simultaneous Localization And Mapping (SLAM) on platforms with limited resources, such as a tablet device. Unlike previous methods, dense reconstruction is achieved by reprojection of RGB-D frames, while local consistency is maintained by deploying relative bundle adjustment principles. We will show quantitative results comparing our technique to the state of the art, as well as detailed reconstructions of various environments ranging from rooms to small apartments. Then, we will address large-scale surface modeling from depth maps by exploiting parallel GPU computing. We will develop a real-time camera tracking method based on the popular KinectFusion system and an online surface alignment technique capable of counteracting drift errors and closing small loops. We will show very high quality meshes outperforming existing methods on publicly available datasets as well as on data recorded with our RGB-D camera, even in complete darkness. Finally, we will move to our Semantic Bundle Adjustment framework to effectively combine object detection and SLAM in a unified system. Though the mathematical framework we will describe is not restricted to a particular sensing technology, in the experimental section we will again refer only to RGB-D sensing. We will discuss successful implementations of our algorithm, showing the benefit of joint object detection, camera tracking and environment mapping.
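As a hedged reference point, KinectFusion-style pipelines such as the one mentioned above typically fuse each depth frame into a truncated signed distance function (TSDF) with a per-voxel weighted running average; the exact variant used in this work is not specified, so the following is only an assumed sketch:

def fuse_tsdf(voxel_tsdf, voxel_weight, new_sdf, truncation=0.03, max_weight=64.0):
    # Weighted running-average update of one TSDF voxel.
    # new_sdf: signed distance (metres) of the voxel from the surface observed
    # in the current depth frame, clamped to the truncation band.
    d = max(-truncation, min(truncation, new_sdf))
    fused = (voxel_tsdf * voxel_weight + d) / (voxel_weight + 1.0)
    return fused, min(voxel_weight + 1.0, max_weight)

# Hypothetical voxel updated with a new observation:
tsdf, weight = fuse_tsdf(0.010, 12.0, new_sdf=0.004)
print(round(tsdf, 4), weight)   # 0.0095 13.0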