909 results for Automatic Inference


Relevance:

20.00%

Publisher:

Abstract:

We propose an extension of the approach provided by Klüppelberg and Kuhn (2009) for inference on second-order structure moments. As in Klüppelberg and Kuhn (2009) we adopt a copula-based approach instead of assuming a normal distribution for the variables, thus relaxing the equality-in-distribution assumption. A new copula-based estimator for structure moments is investigated. The methodology provided by Klüppelberg and Kuhn (2009) is also extended considering the copulas associated with the family of Eyraud-Farlie-Gumbel-Morgenstern distribution functions (Kotz, Balakrishnan, and Johnson, 2000, Equation 44.73). Finally, a comprehensive simulation study and an application to real financial data are performed in order to compare the different approaches.
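For concreteness, below is a minimal sketch (the textbook EFGM copula, not the estimator investigated above, with an invented parameter value) of sampling from an Eyraud-Farlie-Gumbel-Morgenstern copula C(u,v) = uv[1 + θ(1-u)(1-v)], θ ∈ [-1,1], whose Spearman's rho equals θ/3:

```python
import numpy as np
from scipy.stats import spearmanr

def efgm_sample(n, theta, seed=None):
    """Sample n pairs from the EFGM copula C(u,v) = u*v*(1 + theta*(1-u)*(1-v))
    by conditional inversion: given U = u, the conditional CDF of V is
    w = (1 + a)*v - a*v**2 with a = theta*(1 - 2*u), invertible in closed form."""
    rng = np.random.default_rng(seed)
    u, w = rng.uniform(size=(2, n))
    a = theta * (1.0 - 2.0 * u)
    small = np.abs(a) < 1e-12          # near-independence: v = w
    v = np.where(
        small, w,
        ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / np.where(small, 1.0, 2 * a),
    )
    return u, v

u, v = efgm_sample(200_000, theta=0.8, seed=0)
print(spearmanr(u, v)[0], 0.8 / 3)     # both approximately 0.2667
```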

Relevance:

20.00%

Publisher:

Abstract:

In this treatise we consider finite systems of branching particles where the particles move independently of each other according to d-dimensional diffusions. Particles are killed at a position-dependent rate, leaving at their death position a random number of descendants according to a position-dependent reproduction law. In addition, particles immigrate at a constant rate (one immigrant per immigration time). A process with the above properties is called a branching diffusion with immigration (BDI). In the first part we present the model in detail and discuss the properties of the BDI under our basic assumptions. In the second part we consider the problem of reconstructing the trajectory of a BDI from discrete observations. We observe the positions of the particles at discrete times; in particular, we assume that we have no information about the pedigree of the particles. A natural question arises if we want to apply statistical procedures to the discrete observations: how can we find couples of particle positions which belong to the same particle? We give an easy-to-implement 'reconstruction scheme' which allows us to redraw or 'reconstruct' parts of the trajectory of the BDI with high accuracy; moreover, asymptotically the whole path can be reconstructed. Further, we present simulations which show that our partial reconstruction rule is tractable in practice. In the third part we study how the partial reconstruction rule fits into statistical applications. As an extensive example we present a nonparametric estimator for the diffusion coefficient of a BDI where the particles move according to one-dimensional diffusions. This estimator is based on the Nadaraya-Watson estimator for the diffusion coefficient of one-dimensional diffusions and uses the partial reconstruction rule developed in the second part. We are able to prove a rate of convergence for this estimator and, finally, we present simulations which show that the estimator works well even if we leave our set of assumptions.
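For a single fully observed path, the Nadaraya-Watson-type estimator mentioned above takes roughly the following form (a sketch assuming equidistant observations and a Gaussian kernel; the thesis applies such an estimator to the particle paths recovered by the reconstruction rule):

```python
import numpy as np

def nw_diffusion_sq(x_grid, X, dt, h):
    """Nadaraya-Watson-type estimator of sigma^2(x) from equidistant
    observations X[0], X[1], ... of a one-dimensional diffusion: a
    kernel-weighted average of the normalized squared increments
    (X[i+1] - X[i])**2 / dt around each evaluation point x."""
    incr_sq = np.diff(X) ** 2 / dt
    Xi = X[:-1]
    out = np.empty(len(x_grid))
    for j, x in enumerate(x_grid):
        w = np.exp(-0.5 * ((Xi - x) / h) ** 2)   # Gaussian kernel weights
        out[j] = np.sum(w * incr_sq) / np.sum(w)
    return out

# Check on an Ornstein-Uhlenbeck path dX = -X dt + 0.5 dW, so sigma^2 = 0.25.
rng = np.random.default_rng(1)
dt, n = 1e-3, 100_000
X = np.zeros(n)
for i in range(n - 1):
    X[i + 1] = X[i] - X[i] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()
print(nw_diffusion_sq(np.array([0.0]), X, dt, h=0.1))  # approximately [0.25]
```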

Relevance:

20.00%

Publisher:

Abstract:

The aim of this thesis is to explore the production of software systems for embedded systems using techniques from the world of Model Driven Software Development. The most important phase of the development is the definition of a meta-model that captures the fundamental concepts of embedded systems. This model abstracts away from any particular platform and identifies the abstractions that characterize embedded systems in general; the meta-model is therefore platform-independent. For automatic code generation, a reference platform was adopted, namely Arduino. Arduino is an embedded system that is becoming increasingly popular because it combines a good level of performance with a relatively low price. The platform allows the development of special-purpose systems that use sensors and actuators of various kinds, easily connected to the available pins. The meta-model defined here is an instance of the MOF meta-metamodel, formally defined by the OMG. This allows the developer to think of a system in the form of a model, an instance of the defined meta-model. A meta-model can also be regarded as the abstract syntax of a language, so it can be defined by a set of EBNF rules. The technology used for defining the meta-model is Xtext: a framework that allows EBNF rules to be written and that automatically generates the Ecore model associated with the defined meta-model. Ecore is the implementation of EMOF in the Eclipse environment. Xtext also generates plugins that provide an editor guided by the syntax defined in the meta-model. Automatic code generation was implemented using the Xtend2 language, which makes it possible to traverse the Abstract Syntax Tree produced by translating the model into Ecore and to generate all the necessary code files. The generated code provides practically the whole schematic part of the application, while leaving the development of the business logic to the application designer. After the definition of the meta-model for a single embedded system, the level of abstraction was raised, moving towards the definition of the part of the meta-model concerned with the interaction of an embedded system with other systems, that is, towards the view of a System as a set of interacting concentrated systems; this definition is given from the point of view of the concentrated system being modeled. The thesis also introduces a case study which, although fairly simple, provides an example and a tutorial for developing applications with the meta-model. It also shows how the task of the application designer becomes rather simple and immediate, provided it is based on a good analysis of the problem. The results obtained were of good quality, and the meta-model is translated into code that works correctly.
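As a toy illustration of the model-driven idea (hypothetical model format; the thesis uses the Xtext/Xtend2 toolchain rather than Python), a platform-independent model can be turned into an Arduino-flavoured code skeleton while the business logic is left to the application designer:

```python
# Hypothetical platform-independent model of an embedded system:
# named sensors and actuators bound to pins.
model = {
    "name": "GreenhouseController",
    "sensors":   [{"name": "tempSensor", "pin": 0}],
    "actuators": [{"name": "fan", "pin": 7}],
}

def generate_sketch(model):
    """Generate the 'schematic part' of an Arduino-style sketch from
    the model; loop() is deliberately left empty for the designer."""
    lines = [f"// generated from model: {model['name']}"]
    lines += [f"const int {s['name']} = A{s['pin']};" for s in model["sensors"]]
    lines += [f"const int {a['name']} = {a['pin']};" for a in model["actuators"]]
    lines.append("void setup() {")
    lines += [f"  pinMode({a['name']}, OUTPUT);" for a in model["actuators"]]
    lines.append("}")
    lines.append("void loop() {\n  // business logic left to the application designer\n}")
    return "\n".join(lines)

print(generate_sketch(model))
```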

Relevance:

20.00%

Publisher:

Abstract:

The central objective of research in Information Retrieval (IR) is to discover new techniques to retrieve relevant information in order to satisfy an Information Need. The Information Need is satisfied when relevant information can be provided to the user. In IR, relevance is a fundamental concept which has changed over time, from popular to personal: what was considered relevant before was information for the whole population, whereas what is considered relevant now is specific information for each user. Hence, there is a need to connect the behavior of the system to the condition of a particular person and his social context; from this need an interdisciplinary sector called Human-Centered Computing was born. For the modern search engine, the information extracted for the individual user is crucial. According to Personalized Search (PS), two different techniques are necessary to personalize a search: contextualization (the interconnected conditions that occur in an activity) and individualization (the characteristics that distinguish an individual). This shift of focus to the individual's need undermines the rigid linearity of the classical model, which was overtaken by the 'berry-picking' model: search terms change thanks to the informational feedback received from the search activity, introducing the concept of the evolution of search terms. The development of Information Foraging theory, which observed the correlations between animal foraging and human information foraging, also contributed to this transformation through attempts to optimize the cost-benefit ratio. This thesis arose from the need to satisfy human individuality when searching for information, and it develops a synergistic collaboration between the frontiers of technological innovation and the recent advances in IR. The search method developed exploits what is relevant for the user by radically changing the way in which an Information Need is expressed, because it is now expressed through the generation of the query and its own context. In fact, the method was conceived to improve the quality of search by rewriting the query based on contexts automatically generated from a local knowledge base. Furthermore, the idea of optimizing each IR system has led to developing it as a middleware of interaction between the user and the IR system; thereby the system has just two possible actions: rewriting the query and reordering the results. Equivalent actions are described in the PS literature, which generally exploits information derived from the analysis of user behavior, whereas the proposed approach exploits knowledge provided by the user. The thesis goes further to devise a novel assessment procedure, according to the "Cranfield paradigm", in order to evaluate this type of IR system. The results achieved are interesting considering both the effectiveness obtained and the innovative approach undertaken, together with the several applications inspired by the use of a local knowledge base.
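A minimal sketch of such a two-action middleware, with hypothetical names and a toy knowledge-base format (the abstract does not specify the actual data structures used by the thesis):

```python
from typing import Callable

def rewrite_query(query: str, kb: dict[str, list[str]]) -> str:
    """Expand each query term with context terms taken from a local
    knowledge base, e.g. {"jaguar": ["car", "engine"]}."""
    terms = query.lower().split()
    expansion = [ctx for t in terms for ctx in kb.get(t, [])]
    return " ".join(terms + expansion)

def reorder(results: list[str], kb: dict[str, list[str]]) -> list[str]:
    """Rank results by how many knowledge-base context terms they contain."""
    context = {c for ctxs in kb.values() for c in ctxs}
    return sorted(results, key=lambda r: -sum(c in r.lower() for c in context))

def personalized_search(query: str,
                        engine: Callable[[str], list[str]],
                        kb: dict[str, list[str]]) -> list[str]:
    """The middleware's only two actions: rewrite the query going in,
    reorder the results coming back."""
    return reorder(engine(rewrite_query(query, kb)), kb)

# Usage with a stub engine:
kb = {"jaguar": ["car", "engine"]}
engine = lambda q: ["jaguar habitat and diet", "jaguar engine specs"]
print(personalized_search("jaguar", engine, kb))
```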

Relevance:

20.00%

Publisher:

Abstract:

The identification of people by measuring traits of individual anatomy or physiology has led to a specific research area called biometric recognition. This thesis focuses on improving fingerprint recognition systems along three important problems: fingerprint enhancement, fingerprint orientation extraction and automatic evaluation of fingerprint algorithms. An effective extraction of salient fingerprint features depends on the quality of the input fingerprint: if the fingerprint is very noisy, we are not able to detect a reliable set of features. A new fingerprint enhancement method, which is both iterative and contextual, is proposed. This approach detects high-quality regions in fingerprints, selectively applies contextual filtering and iteratively expands like wildfire toward low-quality ones. A precise estimation of the orientation field greatly simplifies the estimation of other fingerprint features (singular points, minutiae) and improves the performance of a fingerprint recognition system. The fingerprint orientation extraction is improved along two directions. First, after the introduction of a new taxonomy of fingerprint orientation extraction methods, several variants of baseline methods are implemented and, pointing out the role of pre- and post-processing, we show how to improve the extraction. Second, the introduction of a new hybrid orientation extraction method, which follows an adaptive scheme, significantly improves the orientation extraction in noisy fingerprints. Scientific papers typically propose recognition systems that integrate many modules, and therefore an automatic evaluation of fingerprint algorithms is needed to isolate the contributions that determine an actual progress in the state of the art. The lack of a publicly available framework to compare fingerprint orientation extraction algorithms motivates the introduction of a new benchmark area called FOE (including fingerprints and manually marked orientation ground truth), along with fingerprint matching benchmarks, in the FVC-onGoing framework. The success of this framework is discussed by providing relevant statistics: more than 1450 algorithms submitted and two international competitions.
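As a concrete reference point, here is a sketch of the classical gradient-based baseline for orientation extraction (one of the families covered by the taxonomy above, not the hybrid method proposed in the thesis):

```python
import numpy as np

def orientation_field(img, block=16):
    """Classical gradient-based baseline: per block, the ridge
    orientation is estimated from averaged doubled-angle gradients,
        theta = 0.5 * atan2(sum(2*gx*gy), sum(gx**2 - gy**2)) + pi/2,
    where the angle doubling keeps opposite gradient directions
    from cancelling out."""
    gy, gx = np.gradient(img.astype(float))   # derivatives along rows, columns
    nrows, ncols = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((nrows, ncols))
    for i in range(nrows):
        for j in range(ncols):
            r = slice(i * block, (i + 1) * block)
            c = slice(j * block, (j + 1) * block)
            gxx = np.sum(gx[r, c] ** 2 - gy[r, c] ** 2)
            gxy = np.sum(2.0 * gx[r, c] * gy[r, c])
            theta[i, j] = 0.5 * np.arctan2(gxy, gxx) + np.pi / 2
    return theta   # ridge orientation per block, in radians
```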

Relevance:

20.00%

Publisher:

Abstract:

In this thesis we take the first steps towards the systematic application of a methodology for automatically building formal models of complex biological systems. Such a methodology could also be useful for designing artificial systems possessing desirable properties such as robustness and evolvability. The approach we follow in this thesis is to manipulate formal models by means of adaptive search methods called metaheuristics. In the first part of the thesis we develop state-of-the-art hybrid metaheuristic algorithms to tackle two important problems in genomics, namely Haplotype Inference by parsimony and the Founder Sequence Reconstruction Problem. We compare our algorithms with other effective techniques in the literature, we show the strengths and limitations of our approaches on various problem formulations and, finally, we propose further enhancements that could possibly improve the performance of our algorithms and widen their applicability. In the second part, we concentrate on Boolean network (BN) models of gene regulatory networks (GRNs). We detail our automatic design methodology and apply it to four use cases which correspond to different design criteria and address some limitations of GRN modeling by BNs. Finally, we tackle the Density Classification Problem with the aim of showing the learning capabilities of BNs. Experimental evaluation of this methodology shows its efficacy in producing networks that meet our design criteria. Our results, consistently with what has been found in other works, also suggest that networks manipulated by a search process exhibit a mixture of characteristics typical of different dynamical regimes.
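A minimal sketch of the kind of model being searched, a random Boolean network with K = 2 inputs per node and synchronous update (illustrative only; the thesis's exact encoding is not given in the abstract):

```python
import random

def random_bn(n, k=2, seed=0):
    """Build a random Boolean network: each of the n nodes reads k
    (possibly repeated) input nodes through a random truth table."""
    rng = random.Random(seed)
    inputs = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    tables = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node looks up its truth-table entry
    indexed by the current values of its inputs."""
    return tuple(
        tables[i][sum(state[node] << b for b, node in enumerate(inputs[i]))]
        for i in range(len(state))
    )

inputs, tables = random_bn(8)
state = (0, 1, 0, 1, 1, 0, 0, 1)
for _ in range(5):
    state = step(state, inputs, tables)
print(state)   # network state after five synchronous updates
```

A metaheuristic search of the kind described above would mutate the truth tables or the input wiring and keep the variants whose dynamics best match the design criteria.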

Relevance:

20.00%

Publisher:

Abstract:

This thesis proposes design methods and test tools for optical systems which may be used in an industrial environment, where not only precision and reliability but also ease of use is important. The approach to the problem has been conceived to be as general as possible, although in the present work the design of a portable device for automatic identification applications has been studied, because this doctorate has been funded by Datalogic Scanning Group s.r.l., a world-class producer of barcode readers. The main functional components of the complete device are the electro-optical imaging, illumination and pattern generator systems. Concerning the electro-optical imaging system, a characterization tool and an analysis tool have been developed to check whether the desired performance of the system has been achieved. Moreover, two design tools for optimizing the imaging system have been implemented. The first optimizes just the core of the system, the optical part, improving its performance while ignoring all other contributions and generating a good starting point for the optimization of the whole complex system. The second tool optimizes the system taking into account its behavior with a model as near as possible to reality, including optics, electronics and detection. Concerning the illumination and pattern generator systems, two tools have been implemented. The first allows the design of free-form lenses, described by an arbitrary analytical function and excited by an incoherent source, and is able to provide custom illumination conditions for all kinds of applications. The second tool consists of a new method to design Diffractive Optical Elements excited by a coherent source for large pattern angles, using the Iterative Fourier Transform Algorithm. Validation of the design tools has been obtained, whenever possible, by comparing the performance of the designed systems with that of fabricated prototypes; in other cases simulations have been used.
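A bare-bones sketch of the Iterative Fourier Transform Algorithm in its classic Gerchberg-Saxton form (the thesis extends IFTA to large pattern angles, which this Fraunhofer-approximation toy does not handle):

```python
import numpy as np

def ifta_phase(target_intensity, iterations=100, seed=0):
    """Find a phase-only DOE profile whose far field (modelled here
    simply by an FFT, i.e. the Fraunhofer approximation) approximates
    the target intensity, by alternating constraint projections."""
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(np.asarray(target_intensity, dtype=float))
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        far = np.fft.fft2(np.exp(1j * phase))          # DOE plane -> far field
        far = target_amp * np.exp(1j * np.angle(far))  # impose target amplitude
        near = np.fft.ifft2(far)                       # far field -> DOE plane
        phase = np.angle(near)                         # keep phase only (phase-only DOE)
    return phase

# Toy target: a bright square on a dark background.
target = np.zeros((64, 64)); target[24:40, 24:40] = 1.0
doe = ifta_phase(target)
replay = np.abs(np.fft.fft2(np.exp(1j * doe))) ** 2
print(replay.max() / replay.mean())   # contrast of the reconstructed pattern
```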

Relevance:

20.00%

Publisher:

Abstract:

In distributed systems like clouds or service-oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, ranging from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions allowing the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed; experimental results support the adoption of this novel approach in real-life scenarios.
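To give the flavor of the problem with a deliberately simplified toy (the actual algorithm also handles component states and bindings, not just static dependencies): if only "requires" relations mattered, a valid deployment sequence would be a topological order of the dependency graph.

```python
from graphlib import TopologicalSorter

# Hypothetical configuration: each component maps to the components
# it requires before it can be deployed.
requires = {
    "webapp":    ["appserver", "database"],
    "appserver": ["jdk"],
    "database":  [],
    "jdk":       [],
}

# Dependencies come first in the resulting deployment plan.
plan = list(TopologicalSorter(requires).static_order())
print(plan)   # e.g. ['database', 'jdk', 'appserver', 'webapp']
```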

Relevance:

20.00%

Publisher:

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L:

G + L + C → S (1)

The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon:

G + L + S → L' (2)

Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum usage of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. So, one of the central elements in this work is the formulation of a set of criteria for intelligent lexical acquisition systems subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments, in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To mention four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system; b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment; c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input; and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha.
The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a summary of the findings, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
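A runnable toy of mappings (1) and (2), with hypothetical names and a deliberately crude 'parser' (the real prototype sits on top of an HPSG processing system):

```python
from collections import defaultdict

def parse(corpus):
    """G + L + C -> S: map utterances to (word, observed-fact) pairs.
    Here the 'fact' is just whether an -s verb form has a following
    token, a crude stand-in for real valency observations."""
    structures = []
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok.endswith("s"):
                fact = "transitive" if i + 1 < len(tokens) else "intransitive"
                structures.append((tok, fact))
    return structures

def acquire(lexicon, structures):
    """G + L + S -> L': merge observed facts into an improved lexicon.
    A Learn-Alpha system would additionally revise wrong earlier entries."""
    new = defaultdict(set, {w: set(v) for w, v in lexicon.items()})
    for word, fact in structures:
        new[word].add(fact)
    return dict(new)

print(acquire({}, parse(["the cat chases mice", "the dog sleeps"])))
# {'chases': {'transitive'}, 'sleeps': {'intransitive'}}
```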

Relevance:

20.00%

Publisher:

Abstract:

People tend to automatically mimic the facial expressions of others. While clear evidence exists on the effect of non-verbal behavior (emotion faces) on automatic facial mimicry, little is known about the role of verbal behavior (emotion language) in triggering such effects. Whereas it is well established that political affiliation modulates facial mimicry, no evidence exists on whether this modulation also passes through verbal means. This research addressed the role of verbal behavior in triggering automatic facial effects depending on whether verbal stimuli are attributed to leaders of different political parties. Study 1 investigated the role of interpersonal verbs, referring to positive and negative emotion expressions and encoding them at different levels of abstraction, in triggering corresponding facial muscle activation in a reader. Study 2 examined the role of verbs expressing positive and negative emotional behaviors of political leaders in modulating automatic facial effects depending on the matched or mismatched political affiliation of participants and politicians of the left and right wing. Study 3 examined whether verbs expressing happiness displays of ingroup politicians induce a more sincere (Duchenne) smile pattern among readers of the same political affiliation relative to happiness expressions of outgroup politicians. Results showed that verbs encoding facial actions at different levels of abstraction elicited differential facial muscle activity (Study 1). Furthermore, political affiliation significantly modulated the facial activation triggered by emotion verbs, as participants showed more congruent and enhanced facial activity towards ingroup politicians' smiles and frowns compared to those of outgroup politicians (Study 2). Participants facially responded with a more sincere smile pattern towards verbs expressing smiles of ingroup compared to outgroup politicians (Study 3). Altogether, the results showed that the role of political affiliation in modulating automatic facial effects also passes through verbal channels and is revealed at a fine-grained level by inducing quantitative and qualitative differences in the automatic facial reactions of readers.

Relevance:

20.00%

Publisher:

Abstract:

Dysfunction of the Autonomic Nervous System (ANS) is a typical feature of chronic heart failure and other cardiovascular diseases. As a simple non-invasive technology, heart rate variability (HRV) analysis provides reliable information on the autonomic modulation of heart rate. The aim of this thesis was to research and develop automatic methods based on ANS assessment for the evaluation of risk in cardiac patients. Several feature selection and machine learning algorithms have been combined to achieve the goals. Automatic assessment of disease severity in Congestive Heart Failure (CHF) patients: a completely automatic method based on long-term HRV was proposed in order to automatically assess the severity of CHF, achieving a sensitivity of 93% and a specificity of 64% in discriminating severe versus mild patients. Automatic identification of hypertensive patients at high risk of vascular events: a completely automatic system was proposed in order to identify the hypertensive patients at higher risk of developing vascular events in the 12 months following the electrocardiographic recording, achieving a sensitivity of 71% and a specificity of 86% in identifying high-risk subjects among hypertensive patients. Automatic identification of hypertensive patients with a history of falls: it was explored whether an automatic identification of fallers among hypertensive patients based on HRV was feasible. The results obtained in this thesis could have implications both in clinical practice and in clinical research. The system has been designed and developed in order to be clinically feasible. Moreover, since a 5-minute ECG recording is inexpensive, easy to assess and non-invasive, future research will focus on the clinical applicability of the system as a screening tool in non-specialized ambulatories, in order to identify high-risk patients to be shortlisted for more complex investigations.
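As an illustration of the kind of HRV features such a pipeline starts from, here is a sketch of standard time-domain measures computed from RR intervals (not necessarily the thesis's exact feature set):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV features from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                       # mean RR interval
        "sdnn":    rr.std(ddof=1),                  # overall variability
        "rmssd":   np.sqrt(np.mean(diff ** 2)),     # short-term (vagal) variability
        "pnn50":   np.mean(np.abs(diff) > 50) * 100 # % of successive diffs > 50 ms
    }

print(hrv_time_domain([812, 790, 830, 805, 795, 860, 815]))
```

Feature vectors of this kind, computed per patient, would then be passed to the feature selection and classification stages described above.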

Relevance:

20.00%

Publisher:

Abstract:

A method for the automatic scaling of oblique ionograms has been introduced. This method also provides a rejection procedure for ionograms that are considered to lack sufficient information, and it exhibits a very good success rate. Observing the Kp index of each autoscaled ionogram, it can be noticed that the behavior of the autoscaling program does not depend on geomagnetic conditions. The comparison between the values of the MUF provided by the presented software and those obtained by an experienced operator indicates that the procedure developed for detecting the nose of oblique ionogram traces is sufficiently efficient, and becomes much more efficient as the quality of the ionograms improves. These results demonstrate that the program allows the real-time evaluation of the MUF values associated with a particular radio link through an oblique radio sounding. The automatic recognition of a part of the trace makes it possible to determine, for certain frequencies, the time taken by the radio wave to travel the path between the transmitter and the receiver. The reconstruction of the ionogram traces suggests the possibility of estimating the electron density between the transmitter and the receiver from an oblique ionogram. The results shown have been obtained with a ray-tracing procedure based on the integration of the eikonal equation and using an analytical ionospheric model with free parameters; this indicates the possibility of applying an adaptive model and a ray-tracing algorithm to estimate the electron density in the ionosphere between the transmitter and the receiver. An additional study has been conducted on a high-quality ionospheric sounding data set, and another algorithm has been designed for the conversion of an oblique ionogram into a vertical one using Martyn's theorem. This allows a further analysis of oblique soundings through the use of the INGV Autoscala program for the automatic scaling of vertical ionograms.
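The conversion rests on the secant law underlying Martyn's theorem; the following is a flat-Earth sketch (a real implementation needs corrections for Earth curvature) mapping a point of an oblique ionogram onto the equivalent vertical one:

```python
import numpy as np

def oblique_to_vertical(f_ob, group_path, D):
    """Map an oblique-ionogram point (frequency f_ob, group path P')
    for a link of ground distance D onto the equivalent vertical
    ionogram point (f_v, virtual height h'), flat-Earth geometry:
        h' = 0.5 * sqrt(P'**2 - D**2)
        sec(phi0) = P' / (2 * h')
        f_ob = f_v * sec(phi0)   (secant law)"""
    h_virt = 0.5 * np.sqrt(group_path ** 2 - D ** 2)
    sec_phi0 = group_path / (2.0 * h_virt)
    return f_ob / sec_phi0, h_virt

# Example: 10 MHz observed at a 1000 km group path over an 800 km link
# maps to a 6 MHz vertical sounding reflected at 300 km virtual height.
print(oblique_to_vertical(10.0, 1000.0, 800.0))
```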

Relevance:

20.00%

Publisher:

Abstract:

One of the most serious problems of modern medicine is the growing emergence of antibiotic resistance among pathogenic bacteria. In these circumstances, different and innovative approaches for treating infections caused by multidrug-resistant bacteria are imperatively required. Bacteriophage therapy is one of the fascinating approaches to be taken into account. It consists of the use of bacteriophages, viruses that infect bacteria, in order to defeat specific bacterial pathogens. Phage therapy is not a new idea: it was widely used around the world in the 1930s and 1940s to treat various infectious diseases, and it is still used in Eastern Europe and the former Soviet Union. Nevertheless, Western scientists mostly lost interest in the further use and study of phage therapy and abandoned it after the discovery and spread of antibiotics. The advancement of scientific knowledge in recent years, together with the encouraging results from recent animal studies using phages to treat bacterial infections and, above all, the urgent need for novel and effective antimicrobials, has prompted additional rigorous research in this field. In particular, in the synthetic biology laboratory of the Department of Life Sciences at the University of Warwick, a novel approach was adopted, starting from the original concept of phage therapy, in order to study a concrete alternative to antibiotics. The innovative idea of the project consists in the development of experimental methodologies which make it possible to engineer a programmable synthetic phage system using a combination of directed evolution, automation and microfluidics. The main aim is to make "the therapeutics of tomorrow individualized, specific, and self-regulated" (Jaramillo, 2015). In this context, one of the most important key points is bacteriophage quantification. Therefore, in this research work, a mathematical model describing the complex dynamics occurring in biological systems involving the continuous growth of bacteriophages, modulated by the performance of the host organisms, was implemented as algorithms in working software using MATLAB. The developed program is able to predict different unknown concentrations of phages much faster than the classical overnight plaque assay. What is more, it gives a meaning and an explanation to the obtained data, making inference about the parameter set of the model, which is representative of the bacteriophage-host interaction.
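As an illustration of the general kind of model involved, here is a minimal phage-host ODE sketch with invented parameters (the thesis's actual model and MATLAB implementation are not reproduced here):

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters: growth rate, carrying capacity, adsorption
# constant, lysis rate, burst size, phage decay rate.
r, C, K_ads, delta, burst, m = 0.7, 1e9, 1e-9, 0.5, 100.0, 0.1

def phage_host(t, y):
    """Susceptible bacteria S, infected bacteria I, free phages P."""
    S, I, P = y
    infection = K_ads * S * P                   # mass-action adsorption
    dS = r * S * (1 - (S + I) / C) - infection  # logistic bacterial growth
    dI = infection - delta * I                  # infected cells lyse at rate delta
    dP = burst * delta * I - infection - m * P  # burst release minus losses
    return [dS, dI, dP]

sol = solve_ivp(phage_host, (0, 24), [1e6, 0.0, 1e4])
print(sol.y[:, -1])   # S, I, P after 24 hours (arbitrary units)
```

Fitting the free parameters of such a model to growth-curve data is what allows phage concentrations to be inferred faster than with an overnight plaque assay.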

Relevance:

20.00%

Publisher:

Abstract:

This study focused on the role of oceanographic discontinuities and the presence of transitional areas in shaping the population structure and the phylogeography of the Raja miraletus species complex, coupled with a test of the effective occurrence of past speciation events. The comparisons between the Atlantic African and the North-Eastern Atlantic-Mediterranean geographic populations were carried out using both Cytochrome Oxidase I and eight microsatellite loci. This approach guaranteed a robust dataset for the identification of a speciation event separating the Atlantic African clade, corresponding to the former nominal species Raja ocellifera, from the NE Atlantic-Mediterranean R. miraletus clade. The deep split between Atlantic Africa and the NE Atlantic-Mediterranean dated to about 11.74 MYA and was likely due to the synergic influence of currents and of two upwelling areas crossing the Western African waters. Within the Mediterranean Sea, particular attention was also paid to the transitional area represented by the Adventura and Maltese Banks, which might have contributed to sustaining the connectivity of the Western and Eastern Mediterranean geographical populations. Furthermore, the geology of the easternmost part of Sicily and the geo-morphological depression of the Calabrian Arc could have driven the differentiation of the Eastern Mediterranean Sea. Although bathymetric and oceanographic discontinuities could represent barriers to dispersal and migration between Eastern and Western Mediterranean samples, a clear and complete genetic separation between them was not detected. The results produced by this work identified a speciation event defining Raja ocellifera and R. miraletus as two different species, describing the R. miraletus species complex as containing the most ancient cryptic speciation event in the family Rajidae and representing another example of how strictly connected the environment, behavioural habits, and evolutionary and ecological drivers are.

Relevance:

20.00%

Publisher:

Abstract:

A real living cell is a complex system governed by many processes which are not yet fully understood: the process of cell differentiation is one of these. In this thesis work we make use of a cell differentiation model to develop gene regulatory networks (Boolean networks) with desired differentiation dynamics. To accomplish this task we have introduced techniques of automatic design and we have performed experiments using various differentiation trees. The results obtained have shown that the developed algorithms, except the Random algorithm, are able to generate Boolean networks with interesting differentiation dynamics. Moreover, we have presented some possible future applications and developments of the cell differentiation model in robotics and in medical research. Understanding the mechanisms involved in biological cells can give us the possibility to explain some dangerous diseases that are not yet understood, such as cancer.