238 results for INFORMATICA
Abstract:
The research analyses the relationship between the political line of the PCI and the way in which that line was interpreted by the party's rank and file in Emilia-Romagna during the 1970s.
Abstract:
In this thesis we have developed solutions to common issues affecting widefield microscopes, addressing the problem of intensity inhomogeneity within an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the non-uniform distribution of the light signal within a single image, known as vignetting. In particular, we propose, for both light and fluorescence microscopy, non-parametric multi-image-based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of manually acquired overlapping images, we validated a fast registration approach to accurately stitch the images together. Finally, we worked on virtually extending the field of view of the camera into the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have a significant depth or are displaced across different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. Different standard metrics are commonly used in the literature to compare the outcomes of existing methods; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
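As a minimal illustration of the classical multi-image flat-field idea (a generic sketch under the assumption of many offset views of the sample, not the specific estimator developed in the thesis; the function names are hypothetical), the vignetting field can be approximated by a per-pixel median over the frames and then divided out of each raw image:

    import numpy as np

    def estimate_vignetting(frames):
        # Approximate the vignetting field as the per-pixel median of many frames
        # acquired at different sample positions, normalized to unit mean.
        stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
        field = np.median(stack, axis=0)
        return field / field.mean()

    def flat_field_correct(raw, field, eps=1e-6):
        # Divide the raw frame by the estimated field to remove the vignetting.
        return np.asarray(raw, dtype=np.float64) / np.maximum(field, eps)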
Abstract:
The cardiomyocyte is a complex biological system in which many mechanisms interact non-linearly to regulate the coupling between electrical excitation and mechanical contraction. For this reason, the development of mathematical models is fundamental in the field of cardiac electrophysiology, where the use of computational tools has become complementary to classical experimentation. My doctoral research has focused on the development of such models for investigating the regulation of ventricular excitation-contraction coupling at the single-cell level. In particular, the following studies are presented in this thesis: 1) Study of the unexpected deleterious effect of a Na channel blocker in a long QT syndrome type 3 patient. Experimental results were used to tune a Na current model that recapitulates the effect of the mutation and the treatment, in order to investigate how these influence the human action potential. Our research suggested that the analysis of the clinical phenotype is not sufficient for recommending drugs to patients carrying mutations with undefined electrophysiological properties. 2) Development of a model of L-type Ca channel inactivation in rabbit myocytes to faithfully reproduce the relative roles of voltage- and Ca-dependent inactivation. The model was applied to the analysis of Ca current inactivation kinetics during normal and abnormal repolarization, and predicts arrhythmogenic activity when inhibiting Ca-dependent inactivation, which is the predominant mechanism under physiological conditions. 3) Analysis of the arrhythmogenic consequences of the crosstalk between the β-adrenergic and Ca/calmodulin-dependent protein kinase signaling pathways. The descriptions of the two regulatory mechanisms, both enhanced in heart failure, were integrated into a novel murine action potential model to investigate how they jointly contribute to the development of cardiac arrhythmias. These studies show how mathematical modeling is well suited to providing new insights into the mechanisms underlying cardiac excitation-contraction coupling and arrhythmogenesis.
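For context, action potential models of this kind typically describe each ionic current with Hodgkin-Huxley-style gating variables; a generic illustrative formulation of a fast Na current (not the thesis-specific equations) is:

    I_{Na} = \bar{g}_{Na}\, m^{3} h\, j\, (V - E_{Na}), \qquad
    \frac{dx}{dt} = \frac{x_{\infty}(V) - x}{\tau_{x}(V)}, \quad x \in \{m, h, j\}

where V is the membrane potential, \bar{g}_{Na} the maximal conductance, E_{Na} the reversal potential, and m, h, j voltage-dependent gating variables.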
Abstract:
Different types of proteins exist, with diverse functions that are essential for living organisms. An important class is represented by transmembrane proteins, which are specifically designed to be inserted into biological membranes and perform very important functions in the cell, such as cell communication and active transport across the membrane. Transmembrane β-barrels (TMBBs) are a sub-class of membrane proteins largely under-represented in structure databases because of the extreme difficulty of experimental structure determination. For this reason, computational tools able to predict the structure of TMBBs are needed. In this thesis, two computational problems related to TMBBs were addressed: the detection of TMBBs in large datasets of proteins and the prediction of the topology of TMBB proteins. First, a method for TMBB detection is presented, based on a novel neural network framework for variable-length sequence classification. The proposed approach was validated on a non-redundant dataset of proteins. Furthermore, we carried out a genome-wide detection on the entire Escherichia coli proteome. In both experiments, the method significantly outperformed other existing state-of-the-art approaches, reaching very high PPV (92%) and MCC (0.82). Second, a method is introduced for TMBB topology prediction. The proposed approach is based on grammatical modelling and probabilistic discriminative models for sequence data labeling. The method was evaluated using a newly generated dataset of 38 TMBB proteins obtained from high-resolution data in the PDB. Results show that the model is able to correctly predict the topology of 25 of the 38 protein chains in the dataset. When tested on previously released datasets, the performance of the proposed approach was comparable or superior to the current state of the art in TMBB topology prediction.
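The two reported scores are standard binary-classification measures, defined from true/false positives and negatives as:

    \mathrm{PPV} = \frac{TP}{TP + FP}, \qquad
    \mathrm{MCC} = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}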
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances in which problems can emerge: data preparation, the actual mining, and interpretation of results. Other problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigate the data preparation problem and propose a solution for identifying the "case-id" whenever this field is not explicitly indicated. After that, we concentrate on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm in order to exploit non-instantaneous events. The use of interval-based recording leads to a significant improvement in performance. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are then proposed: the first is the adaptation of a frequency-counting algorithm to the control-flow discovery problem; the second constitutes a framework of models which can be used for different kinds of streams (stationary versus evolving).
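As an illustration of the kind of frequency counting used in on-line control-flow discovery (a generic sketch, not the algorithm developed in the thesis; the function and variable names are hypothetical), directly-follows relations can be counted incrementally over a stream of (case-id, activity) events:

    from collections import defaultdict

    def update_directly_follows(case_id, activity, last_activity, df_counts):
        # Record the directly-follows relation between the previous and the
        # current activity of the same case, then remember the current activity.
        prev = last_activity.get(case_id)
        if prev is not None:
            df_counts[(prev, activity)] += 1
        last_activity[case_id] = activity

    last_activity, df_counts = {}, defaultdict(int)
    for case_id, activity in [("c1", "A"), ("c2", "A"), ("c1", "B"), ("c1", "C")]:
        update_directly_follows(case_id, activity, last_activity, df_counts)
    # df_counts now holds {("A", "B"): 1, ("B", "C"): 1}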
Abstract:
Progress in the miniaturization of electronic components and the design of wireless systems has paved the way towards ubiquitous and pervasive communications, enabling anywhere and anytime connectivity. Wireless devices placed on, inside, or around the human body are becoming commonly used, giving rise to the class of body-centric communications. The presence of the body, with all its peculiar characteristics, has to be properly taken into account in the development and design of wireless networks in this context. This thesis addresses various aspects of body-centric communications, with the aim of investigating the network performance achievable in different scenarios. The main original contributions pertain to the performance evaluation of Wireless Body Area Networks (WBANs) at the Medium Access Control layer: the application of Link Adaptation to these networks is proposed, the Carrier Sense Multiple Access with Collision Avoidance algorithms used for WBANs are extensively investigated, and coexistence with other wireless systems is examined. Then, an analytical model for interference in wireless access networks is developed, which can be applied to the study of communication between devices located on humans and fixed nodes of an external infrastructure. Finally, results of experimental activities investigating human mobility and sociality are presented.
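As a rough illustration of the contention mechanism investigated (a toy slotted binary-exponential-backoff sketch with a hypothetical idle-slot probability p_idle, not the WBAN-specific algorithms analyzed in the thesis):

    import random

    def csma_ca_attempts(p_idle=0.7, cw_min=8, cw_max=64, max_retries=5):
        # Toy CSMA/CA model: each slot is sensed idle with probability p_idle;
        # the backoff counter counts down only in idle slots, and the contention
        # window doubles after every failed transmission attempt.
        cw = cw_min
        for attempt in range(max_retries + 1):
            backoff = random.randint(0, cw - 1)
            while backoff > 0:
                if random.random() < p_idle:   # idle slot: decrement counter
                    backoff -= 1
            if random.random() < p_idle:       # medium idle at zero: transmit
                return attempt                 # number of retries used
            cw = min(2 * cw, cw_max)           # busy: double window and retry
        return None                            # give up after max_retries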
Abstract:
The aim of this thesis is the study of techniques for the efficient management and use of the spectrum based on cognitive radio technology. The ability of cognitive radio technologies to adapt to the real-time conditions of their operating environment offers the potential for more flexible use of the available spectrum. In this context, international interest is particularly focused on the “white spaces” in the UHF band of digital terrestrial television. Spectrum sensing and geo-location databases have been considered in order to obtain information on the electromagnetic environment. Different methodologies have been considered in order to investigate the spectral resources potentially available to white space devices in the TV band. The adopted methodologies are based on the geo-location database approach, used either autonomously or in combination with sensing techniques. A novel and computationally efficient methodology for the calculation of the maximum permitted white space device EIRP is then proposed. The methodology is suitable for implementation in TV white space databases. Different Italian scenarios are analyzed in order to identify both the available spectrum and the white space device emission limits. Finally, two different applications of cognitive radio technology are considered. The first is emergency management: attention is focused on both cognitive and autonomic networking approaches when deploying an emergency management system. Cognitive technology is then considered in applications related to satellite systems: in particular, a hybrid cognitive satellite-terrestrial system is introduced and an analysis of the coexistence between terrestrial and satellite networks under a cognitive approach is performed.
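In generic link-budget terms (an illustrative relation with hypothetical symbols, not the specific methodology proposed in the thesis), the EIRP a white space device may radiate is bounded by the maximum interference power tolerable at the protected DTT receiver plus the coupling loss between the device and that receiver, all in dB:

    \mathrm{EIRP}_{\max} = I_{\max} + L_{\mathrm{coupling}}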
Abstract:
Perfusion CT imaging of the liver has the potential to improve the evaluation of tumour angiogenesis. Quantitative parameters can be obtained by applying mathematical models to the Time Attenuation Curve (TAC). However, accurate quantification of perfusion parameters remains difficult, owing, for example, to the algorithms employed, the mathematical model, the patient’s weight and cardiac output, and the acquisition system. In this thesis, new parameters and alternative methodologies for liver perfusion CT are presented in order to investigate the causes of variability of this technique. First, analyses were made to assess the variability related to the mathematical model used to compute arterial Blood Flow (BFa) values. Results were obtained by implementing algorithms based on the “maximum slope method” and the “dual-input one-compartment model”. Statistical analysis on simulated data demonstrated that the two methods are not interchangeable; nevertheless, the slope method is always applicable in the clinical context. Then, the variability related to TAC processing in the application of the slope method is analyzed. Comparison with manual selection allows the best automatic algorithm for computing BFa to be identified. The consistency of the Standardized Perfusion Value (SPV) was evaluated and a simplified calibration procedure was proposed. Finally, the quantitative value of the perfusion maps was analyzed. The ROI-based and map-based approaches provide consistent BFa values, which means that the pixel-by-pixel algorithm gives reliable quantitative results; also in the pixel-by-pixel approach, the slope method gives better results. In conclusion, the development of new automatic algorithms for a consistent computation of BFa, together with the analysis and definition of a simplified technique to compute the SPV parameter, represents an improvement in the field of liver perfusion CT analysis.
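For reference, in a common formulation (not necessarily the exact implementation analysed in the thesis), the maximum slope method estimates arterial blood flow from the tissue and aortic time attenuation curves as:

    \mathrm{BF_a} = \frac{\max_t \left[ dC_{\mathrm{tissue}}(t)/dt \right]}{\max_t C_{\mathrm{aorta}}(t)}

i.e. the peak slope of the tissue enhancement curve normalized by the peak aortic enhancement.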
Abstract:
This work presents exact algorithms for Resource Allocation and Cyclic Scheduling Problems (RA&CSPs). Cyclic Scheduling Problems arise in a number of application areas, such as hoist scheduling, mass production, compiler design (scheduling loops on parallel architectures), software pipelining, and embedded system design. The RA&CS problem concerns assigning time and resources to a set of activities that are to be repeated indefinitely, subject to precedence and resource capacity constraints. In this work we present two constraint programming frameworks addressing two different types of cyclic problems. First, we consider the disjunctive RA&CSP, where the allocation problem involves unary resources. Instances are described through the Synchronous Data-flow (SDF) Model of Computation. The key problem of finding a maximum-throughput allocation and scheduling of Synchronous Data-Flow graphs onto a multi-core architecture is NP-hard and has traditionally been solved by means of heuristic (incomplete) algorithms. We propose an exact (complete) algorithm for the computation of a maximum-throughput mapping of applications specified as SDF graphs onto multi-core architectures. Results show that the approach can handle realistic instances in terms of size and complexity. Next, we tackle the Cyclic Resource-Constrained Scheduling Problem (CRCSP). We propose a Constraint Programming approach based on modular arithmetic: in particular, we introduce a modular precedence constraint and a global cumulative constraint along with their filtering algorithms. Many traditional approaches to cyclic scheduling operate by fixing the period value and then solving a linear problem in a generate-and-test fashion. Conversely, our technique is based on a non-linear model and tackles the problem as a whole: the period value is inferred from the scheduling decisions. The proposed approaches have been tested on a number of non-trivial synthetic instances and on a set of realistic industrial instances, achieving good results on problems of practical size.
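As an illustration of the periodic setting (a standard cyclic-scheduling relation, not necessarily the exact modular constraint introduced in the thesis), a precedence between activity i and activity j with iteration distance \delta_{ij} under a schedule of period \lambda reads:

    s_j + \delta_{ij}\,\lambda \;\geq\; s_i + d_i

where s_i is the start time and d_i the duration of activity i; note that the relation is non-linear in (\lambda, s) when the period itself is a decision variable, which is the case tackled by the proposed approach.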
Abstract:
The construction of ontology networks typically occurs at design time, at the hands of knowledge engineers who assemble their components statically. There are, however, use cases where ontology networks need to be assembled upon request and processed at runtime, without altering the stored ontologies and without the networks tampering with one another. These are what we call "virtual [ontology] networks", and keeping track of how an ontology changes in each virtual network is called "multiplexing". Issues may arise from the connectivity of ontology networks. In many cases, simple flat import schemes will not work, because many ontology managers can cause property assertions to be erroneously interpreted as annotations and ignored by reasoners. Also, multiple virtual networks should optimize their cumulative memory footprint, and where they cannot, this should occur only for very limited periods of time. We claim that these problems should be handled by the software that serves these ontology networks, rather than by ontology engineering methodologies. We propose a method that spreads multiple virtual networks across a 3-tier structure and can reduce the number of erroneously interpreted axioms, under certain raw statement distributions across the ontologies. We assumed OWL as the core language handled by semantic applications in the framework at hand, due to the greater availability of reasoners and rule engines. We also verified that, in common OWL ontology management software, OWL axiom interpretation occurs in the worst-case scenario of a pre-order visit. To measure the effectiveness and space-efficiency of our solution, a Java and RESTful implementation was produced within an Apache project. We verified that a 3-tier structure can accommodate reasonably complex ontology networks better, in terms of the expressivity of OWL axiom interpretation, than flat-tree import schemes can. We measured both the memory overhead of the additional components we put on top of traditional ontology networks and the framework's caching capabilities.
Abstract:
This research aims to study and analyse the different manifestations of gambling in Bologna at the end of the nineteenth century. Starting from the assumption that forms of play represent neither merely a practice nor only a social or economic fact, but are rather the sum of different components, the goal is to study history from a specific point of view: that of the games and players tied to practices of risk. Through police archive records and other documents of the period, the different aspects of the phenomenon of gambling were investigated: the places in which games were played, the rules that regulated them, the social strata within which they were played, and the changes in political and social conditions that led to the transformation of the various forms of play and of the communities that practised them; at the same time, the work seeks to understand how the perception of risk, the idea of competition, and the very attitude towards play change and persist.
Abstract:
The thesis applies ICC (Implicit Computational Complexity) techniques to probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time), providing a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in Probabilistic Polynomial Time. The thesis is divided into two parts. The first part focuses on solving the problem by creating a prototype functional language (a probabilistic variation of the lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in Probabilistic Polynomial Time or not. This thesis represents one of the first steps for Implicit Computational Complexity over probabilistic classes. Hard open problems remain to be investigated and solved; many theoretical aspects are strongly connected with these topics, and I expect that in the future wide attention will be paid to ICC and probabilistic classes.
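A typical ingredient of such probabilistic calculi (shown here only as a generic illustration, not the exact syntax of the language defined in the thesis) is a binary probabilistic choice operator whose reduction is itself probabilistic:

    M \oplus N \;\to\; M \ \text{with probability } \tfrac{1}{2}, \qquad
    M \oplus N \;\to\; N \ \text{with probability } \tfrac{1}{2}

so that a term no longer evaluates to a single result but induces a distribution over results, which is what soundness and completeness with respect to PP are stated against.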
Abstract:
The quest for universal memory is driving the rapid development of memories with superior all-round capabilities in non-volatility, high speed, high endurance and low power. The memory subsystem accounts for a significant share of the cost and power budget of a computer system. Current DRAM-based main memory systems are starting to hit power and cost limits. To resolve this issue, the industry is improving existing technologies such as Flash and exploring new ones. Among these new technologies is Phase Change Memory (PCM), which overcomes some of the shortcomings of Flash, such as durability and scalability. This alternative non-volatile memory technology, which uses the resistance contrast in phase-change materials, offers higher density than DRAM and can help increase the main memory capacity of future systems while remaining within cost and power constraints. Chalcogenide materials can suitably be exploited for manufacturing phase-change memory devices. Charge transport in the amorphous chalcogenide GST used for memory devices is modeled using two contributions: hopping of trapped electrons and motion of band electrons in extended states. Crystalline GST exhibits an almost Ohmic I(V) curve. In contrast, amorphous GST shows a high resistance at low biases while, above a threshold voltage, a transition takes place from a highly resistive to a conductive state, characterized by negative differential-resistance behavior. A clear and complete understanding of the threshold behavior of the amorphous phase is fundamental for exploiting such materials in the fabrication of innovative non-volatile memories. The type of feedback that produces the snapback phenomenon is described as a filamentation in energy that is controlled by electron–electron interactions between trapped electrons and band electrons. The model thus derived is implemented within a state-of-the-art simulator. An analytical version of the model is also derived and is useful for discussing the snapback behavior and the scaling properties of the device.
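Schematically (a generic decomposition consistent with the description above, not the full model equations of the thesis), the current density in the amorphous phase can be written as the sum of the two contributions, with the trap-limited hopping term thermally activated:

    J = J_{\mathrm{band}} + J_{\mathrm{hop}}, \qquad J_{\mathrm{hop}} \propto \exp\!\left(-\frac{E_a}{k_B T}\right)

where E_a is an activation energy, k_B the Boltzmann constant and T the temperature; the relative weight of the two terms changes as the device approaches the threshold voltage.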