886 results for semi-autonomous information retrieval
Abstract:
Report for the scientific sojourn at the Swiss Federal Institute of Technology Zurich, Switzerland, between September and December 2007. In order to make robots useful assistants in our everyday life, the ability to learn and recognize objects is of essential importance. However, object recognition in real scenes is one of the most challenging problems in computer vision, as it involves dealing with numerous difficulties. Furthermore, mobile robotics adds a new challenge to the list: computational complexity. In a dynamic world, information about the objects in the scene can become obsolete before it is ready to be used if the detection algorithm is not fast enough. Two recent object recognition techniques have achieved notable results: the constellation approach proposed by Lowe and the bag-of-words approach proposed by Nistér and Stewénius. The Lowe constellation approach is the one currently used in the robot localization project of the COGNIRON project. This report is divided into two main sections. The first section briefly reviews the currently used object recognition system, the Lowe approach, brings to light the drawbacks found for object recognition in the context of indoor mobile robot navigation, and describes the proposed improvements to the algorithm. The second section reviews the alternative bag-of-words method, together with several experiments conducted to evaluate its performance on our own object databases, and proposes some modifications to the original algorithm to make it suitable for object detection in unsegmented images.
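For illustration, the bag-of-words idea evaluated in the report can be summarized in a few lines: local image descriptors are quantized against a learned visual vocabulary, and images are compared as histograms of visual words. The sketch below is a minimal illustration of that idea only, not the Nistér-Stewénius vocabulary tree or the report's implementation; the function names, descriptor source, and vocabulary size are assumptions.

```python
# Minimal bag-of-words image matching sketch (illustrative, not the paper's code).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(all_descriptors: np.ndarray, n_words: int = 100) -> KMeans:
    """Cluster local descriptors (e.g. SIFT vectors) into visual words."""
    return KMeans(n_clusters=n_words, n_init=10).fit(all_descriptors)

def bow_histogram(descriptors: np.ndarray, vocab: KMeans) -> np.ndarray:
    """Quantize one image's descriptors and return a normalized word histogram."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

def similarity(h1: np.ndarray, h2: np.ndarray) -> float:
    """Cosine similarity between two bag-of-words histograms."""
    return float(h1 @ h2 / (np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12))
```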
Abstract:
BACKGROUND: DNA sequence integrity, mRNA concentrations and protein-DNA interactions have been subject to genome-wide analyses based on microarrays with ever-increasing efficiency and reliability over the past fifteen years. However, novel technologies for Ultra High-Throughput DNA Sequencing (UHTS) have very recently been harnessed to study these phenomena with unprecedented precision. As a consequence, the extensive bioinformatics environment available for array data management, analysis, interpretation and publication must be extended to include these novel sequencing data types. DESCRIPTION: MIMAS was originally conceived as a simple, convenient and local Microarray Information Management and Annotation System focused on GeneChips for expression profiling studies. MIMAS 3.0 enables users to manage data from high-density oligonucleotide SNP Chips, expression arrays (both 3'UTR and tiling) and promoter arrays, BeadArrays, as well as UHTS data, using a MIAME-compliant standardized vocabulary. Importantly, researchers can export data in MAGE-TAB format and upload them to the EBI's ArrayExpress certified data repository in a one-step procedure. CONCLUSION: We have vastly extended the capability of the system such that it processes the data output of six types of GeneChips (Affymetrix), two different BeadArrays for mRNA and miRNA (Illumina) and the Genome Analyzer (a popular Ultra High-Throughput DNA Sequencer, Illumina), without compromising its flexibility and user-friendliness. MIMAS, appropriately renamed the Multiomics Information Management and Annotation System, is currently used by scientists working in approximately 50 academic laboratories and genomics platforms in Switzerland and France. MIMAS 3.0 is freely available at http://multiomics.sourceforge.net/.
Abstract:
The Internet is increasingly used as a source of information on health issues and is probably a major source of patient empowerment. This process is, however, limited by the frequently poor quality of web-based health information designed for consumers. Better diffusion of information about the criteria defining the quality of website content, and about useful methods for searching for such information, could be particularly useful to patients and their relatives. A brief, six-item DISCERN version, characterized by a high specificity for detecting websites with good or very good content quality, was recently developed. This tool could facilitate the identification of high-quality information on the web by patients and may improve the empowerment process initiated by the development of the health-related web.
Abstract:
This paper presents a hybrid behavior-based scheme using reinforcement learning for high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi on-line neural-Q_learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous approach to the Q_learning algorithm based on a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.
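For illustration, the core of a neural Q-learning update of this kind can be sketched as follows. This is a minimal sketch in the spirit of SONQL (a network mapping states to per-action Q-values, trained on the temporal-difference error), not the paper's algorithm; the layer sizes, tanh activation, and learning parameters are assumptions.

```python
# Minimal neural Q-learning sketch (illustrative, not the SONQL implementation).
import numpy as np

class NeuralQ:
    """One-hidden-layer network approximating Q(s, a) for a discrete action set."""

    def __init__(self, n_state, n_actions, n_hidden=16, lr=0.01, gamma=0.95):
        rng = np.random.default_rng(0)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_state))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_actions, n_hidden))
        self.b2 = np.zeros(n_actions)
        self.lr, self.gamma = lr, gamma

    def q_values(self, s):
        h = np.tanh(self.W1 @ s + self.b1)
        return self.W2 @ h + self.b2

    def update(self, s, a, r, s_next, done):
        """One TD(0) step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
        target = r if done else r + self.gamma * np.max(self.q_values(s_next))
        h = np.tanh(self.W1 @ s + self.b1)
        err = (self.W2[a] @ h + self.b2[a]) - target
        # backpropagate the squared TD error through the two layers
        dh = err * self.W2[a] * (1.0 - h ** 2)
        self.W2[a] -= self.lr * err * h
        self.b2[a] -= self.lr * err
        self.W1 -= self.lr * np.outer(dh, s)
        self.b1 -= self.lr * dh
```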
Abstract:
The absolute necessity of obtaining 3D information of structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at every moment, the position of the mobile robot with respect to the scene is indispensable, and this information must be obtained in the shortest computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps because of the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light: this relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant coded structured light techniques used in recent years.
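As a concrete illustration of the codification idea, the sketch below decodes a time-multiplexed binary-coded pattern sequence: n projected stripe patterns give each camera pixel an n-bit code identifying its projector column, which solves the correspondence directly. The plain binary code and the fixed threshold are simplifying assumptions; the surveyed literature covers many more codification strategies (Gray codes, phase shifting, spatial codes).

```python
# Minimal time-multiplexed binary structured-light decoding sketch.
import numpy as np

def decode_binary_patterns(images, threshold=128):
    """images: list of n grayscale frames (H x W arrays), coarsest bit first.
    Returns an H x W map of projector column indices in [0, 2^n)."""
    code = np.zeros(images[0].shape, dtype=int)
    for frame in images:
        # each frame contributes one bit of the per-pixel stripe code
        code = (code << 1) | (frame > threshold).astype(int)
    return code
```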
Abstract:
Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure, or correlation between its surfaces (inner complexity), and from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among others.
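For intuition, the mutual information at the heart of these measures can be illustrated in its discrete form. The sketch below computes I(X; Y) for a finite joint distribution; this only approximates the continuous mutual information of the line-based channel used in the paper, and the joint table is a hypothetical stand-in for line-crossing statistics.

```python
# Discrete mutual information of a channel from a joint probability table.
import numpy as np

def mutual_information(joint: np.ndarray) -> float:
    """I(X;Y) in bits for a joint distribution p(x, y) given as a 2-D array."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal p(x)
    py = p.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))
```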
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, as an alternative to geostatistics. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data, which have no measured value but describe prior knowledge and other relevant data in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and to describe the stochastic variability and non-uniqueness of spatial properties. It is also able to capture and preserve key spatial dependencies, such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. Semi-supervised SVR, as a data-driven algorithm, is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. The uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and the smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
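To make the regression component concrete, the sketch below fits a plain (supervised) SVR to hypothetical well observations of a petrophysical property and predicts it on a grid. The paper's semi-supervised extension, which additionally exploits unlabelled points encoding prior geological knowledge, is not reproduced here; the data, kernel, and parameter choices are illustrative assumptions.

```python
# Supervised SVR baseline for spatial property prediction (illustrative only).
import numpy as np
from sklearn.svm import SVR

# hypothetical well data: (x, y) locations and measured permeability values
locations = np.array([[0.1, 0.2], [0.4, 0.8], [0.7, 0.3], [0.9, 0.9]])
permeability = np.array([120.0, 340.0, 80.0, 410.0])

model = SVR(kernel="rbf", C=100.0, epsilon=5.0).fit(locations, permeability)

# predict the property on a regular grid to build a (deterministic) geomodel
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = model.predict(grid).reshape(gx.shape)
```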
Abstract:
The purpose of this paper is to describe the collaboration between librarians and scholars at a virtual university aimed at facilitating collaborative learning on how to manage information resources. The personal information behaviour of e-learning students when managing information resources for academic, professional and daily-life purposes was studied through 24 semi-structured face-to-face interviews. The results of the content analysis of the interview transcriptions highlighted that, in the workplace and daily-life contexts, competent information behaviour is always linked to a proactive attitude, that is to say, participants seek information without any extrinsic reward or threat of punishment. In the academic context, a low level of information literacy was observed, which seems to be related to a prevalent uninvolved attitude.
Abstract:
Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this exploitation has resulted in the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources: information databases such as Wikipedia, Dmoz and the open data available on the net represent a great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English; today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We present the theoretical foundations as well as the ontology itself, named the Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, either formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to give some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be thorough.
Abstract:
In order to investigate the spatial and temporal variability (daily, seasonal and inter-annual) of CO2 and O2 air-sea fluxes and their underlying processes, a dense network of observations is required. For this purpose, the Cape Verde Ocean Observatory (CVOO) provides a unique infrastructure. The information thus obtained also links biological productivity and atmospheric composition. To expand these capabilities, a novel “virtual mooring” approach for high-resolution measurements, based on a modified NEMO profiling float, is pursued. This profiling float was equipped with O2 and pCO2 sensors for the first time, in order to collect daily depth profiles (0-200 m) in the vicinity of the ocean site. Data access and remote control are provided through Iridium satellite telemetry. Recalibrations and redeployments are carried out every 1-3 months. First, we present the newly developed instrument and the innovative in situ, real-time approach behind it. Second, we show the interdisciplinary scientific objectives that will benefit from this approach as a result of the intensive partnership between IFM-GEOMAR and INDP over the last years.
Abstract:
We formulate an evolutionary learning process in the spirit of Young (1993a) for games of incomplete information. The process involves trembles. For many games, if the amount of trembling is small, play will be in accordance with the games' (semi-strict) Bayesian equilibria most of the time. This supports the notion of Bayesian equilibrium. Further, play will often be in accordance with exactly one Bayesian equilibrium most of the time. This gives a selection among the Bayesian equilibria. For two specific games of economic interest we characterize this selection. The first is an extension to incomplete information of the prototype strategic conflict known as Chicken. The second is an incomplete-information bilateral monopoly, which is also an extension to incomplete information of Nash's demand game, or a simple version of the so-called sealed-bid double auction. For both games, selection by evolutionary learning is in favor of Bayesian equilibria where some types of players fail to coordinate, such that the outcome is inefficient.
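The flavour of such a process can be conveyed with a toy simulation: adaptive play with trembles (following Young, 1993a) in an ordinary complete-information Chicken game, rather than the incomplete-information games treated in the paper. The memory length, tremble rate, and payoffs below are illustrative assumptions.

```python
# Toy adaptive play with trembles in a symmetric Chicken game (illustrative).
import numpy as np

payoff = np.array([[0, 3],    # row player: Dare  vs (Dare, Yield)
                   [1, 2]])   # row player: Yield vs (Dare, Yield)
rng = np.random.default_rng(1)
memory, eps, T = 10, 0.05, 50_000
hist = [rng.integers(2, size=memory), rng.integers(2, size=memory)]
counts = np.zeros((2, 2))

for t in range(T):
    acts = []
    for i in (0, 1):
        if rng.random() < eps:                        # tremble: random action
            acts.append(int(rng.integers(2)))
        else:                                         # best reply to opponent's
            freq = np.bincount(hist[1 - i], minlength=2) / memory
            acts.append(int(np.argmax(payoff @ freq)))
    hist = [np.append(hist[0][1:], acts[0]), np.append(hist[1][1:], acts[1])]
    counts[acts[0], acts[1]] += 1

# with small trembles, most mass should sit on the pure equilibria (D,Y)/(Y,D)
print(counts / T)
```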
Abstract:
Technological developments in the information society bring new challenges, both to the applicability and to the enforceability of the law. One major challenge is posed by new entities such as pseudonyms, avatars, and software agents that operate at an increasing distance from the physical persons "behind" them (the "principal"). In case of accidents or misbehavior, current laws require that the physical or legal principal behind the entity be found so that she can be held to account. This may be problematic if the linkability of the principal and the operating entity is questionable. In light of the ongoing developments in electronic agents, there is sufficient reason to conduct a review of the literature in order to more closely examine arguments for and against legal personhood for some nonhuman acting entities. This article also includes a discussion of alternative approaches to solving the "accountability gap."
Abstract:
This thesis examines how oversight bodies, as part of an ATI (access to information) policy, contribute to the achievement of the policy's objectives. The aim of the thesis is to see how oversight bodies and the work they do affect the implementation of their respective ATI policies, and thereby contribute to those policies' objectives, using a comparative case study approach. The thesis investigates how federal/central government level information commissioners in four jurisdictions - Germany, India, Scotland, and Switzerland - enforce their respective ATI policies, which tasks they carry out in addition to their enforcement duties, the challenges they face in their work, and the ways they overcome these. Qualitative data were gathered from primary and secondary documents as well as from 37 semi-structured interviews with staff of the commissioners' offices, administrative officials whose job entails complying with ATI, people who have made ATI requests and appealed to their respective oversight body, and external experts who have studied ATI implementation in their particular jurisdiction. The thesis finds that the aspect of an oversight body's formal independence with the greatest impact on its work is resource control, and that although the powers granted by law set the framework for ensuring that the administration properly complies with the policy, the commissioner's leadership style - a component of informal independence - has more influence than the formal attributes of independence in determining how resources are obtained and used, and how staff set priorities and use the powers they are granted by law. The conclusion, therefore, is that an ATI oversight body's ability to contribute to the achievement of the policy's objectives is a function of three main factors: a. the commissioner's leadership style; b. the adequacy of resources and the degree of control the organization has over them; c. the powers and the exercise of discretion in using them. In effect, the thesis argues that it is difficult to pinpoint the value of the formal powers set out for the oversight body in the ATI law, and that decisions on whether and how to use them matter more than the presumed strength of the powers. It also claims that the choices made by the commissioners and their staff regarding priorities and the use of powers are determined to a large extent by the adequacy of resources and the degree of control the organization has over those resources. In turn, how the head of the organization leads and manages the oversight body is crucial to both the adequacy of the organization's resources and the decisions made about the use of powers. Together, these three factors have a significant impact on the body's effectiveness in contributing to ATI objectives.
Abstract:
Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive for detecting multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability. Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce user interaction and to provide an objective tool that eliminates inter- and intra-observer variability. Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF) and partial volume. An expectation maximisation method which fits a multivariate Gaussian mixture model to the T1-w, T2-w and PD-w images is used for this purpose. Based on the obtained tissue masks, and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation. Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground truth annotations provided by expert radiologists. The following values were obtained: a true positive (TP) fraction of 64%, a false positive (FP) fraction of 80%, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values. Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FPs, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before they can be used in standard clinical practice. Our future work will focus on tackling this issue.
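The central steps of the described pipeline (EM-fitted Gaussian mixture tissue classification followed by a GM-based intensity threshold on FLAIR) can be sketched as follows. This is a hedged illustration, not the authors' implementation: pre-processing, partial-volume handling, and the spatial refinement step are omitted, and the component count, threshold factor, and array names are assumptions.

```python
# Sketch: GMM tissue classification + FLAIR intensity threshold for lesions.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment(t1, t2, pd, flair, brain_mask, alpha=3.0, gm_component=0):
    """Volumes are 3-D intensity arrays; brain_mask is boolean. gm_component
    is the mixture component assumed to correspond to grey matter."""
    # stack per-voxel multi-channel intensities inside the brain mask
    feats = np.stack([t1[brain_mask], t2[brain_mask], pd[brain_mask]], axis=1)
    labels = GaussianMixture(n_components=4, random_state=0).fit_predict(feats)
    # estimate GM statistics on FLAIR and threshold for hyperintense lesions
    flair_b = flair[brain_mask]
    gm = labels == gm_component
    mu, sigma = flair_b[gm].mean(), flair_b[gm].std()
    lesion = np.zeros(flair.shape, dtype=bool)
    lesion[brain_mask] = flair_b > mu + alpha * sigma
    tissue = np.zeros(flair.shape, dtype=int)
    tissue[brain_mask] = labels + 1   # 0 = background, 1..4 = tissue classes
    return tissue, lesion
```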