990 results for Extracting information


Relevance:

30.00%

Publisher:

Abstract:

Deoxyribonucleic acid (DNA) extraction has evolved considerably since it was first performed in 1869. It is the first step required for many of the downstream applications used in the field of molecular biology. Whole blood samples are one of the main sources used to obtain DNA, and many different protocols are available to perform nucleic acid extraction on such samples. These methods range from very basic manual protocols to more sophisticated automated DNA extraction protocols. Given the wide range of available options, it would be ideal to determine which ones perform best in terms of cost-effectiveness and time efficiency. We have reviewed the history of DNA extraction and the methods most commonly used to extract DNA from whole blood samples, highlighting their individual advantages and disadvantages. We also searched the current scientific literature for studies comparing different nucleic acid extraction methods, to determine the best available choice. Based on our research, there is not enough scientific evidence to support one particular DNA extraction method for whole blood samples. Choosing a suitable method still requires consideration of many different factors, and more research is needed to validate the choices made at facilities around the world.

Relevance:

30.00%

Publisher:

Abstract:

Health Information Exchange (HIE) is an interesting phenomenon: a patient-centric scenario for managing health and medical information, enhanced by the integration of Information and Communication Technologies (ICT). As health information systems take on increasingly complex directives in the wake of the 'big data' paradigm, extracting quality information is challenging. This talk will share ICT-enabled healthcare scenarios that employ big data analytics. It will also discuss research and development in big data analytics, including current trends in applying these technologies to healthcare services, and the critical research challenges in extracting quality information to improve quality of life.

Relevance:

30.00%

Publisher:

Abstract:

We present a methodology for extracting legal norms from regulatory documents for their formalisation and later compliance checking. The need for the methodology is motivated by the shortcomings of existing approaches, in which the rule type and the process aspects relevant to the rules are largely overlooked. The methodology incorporates the well-known IF...THEN structure, extended with the process aspect and rule type, and guides the proper extraction of the conditions and logical structure of legal rules for reasoning about and modelling obligations for compliance checking.
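
As an illustration of the kind of structure such a methodology might produce, the sketch below models an extracted rule as an IF...THEN record carrying a rule type and a process aspect. All names and the example rule are illustrative assumptions, not the paper's actual formalisation.

```python
from dataclasses import dataclass
from enum import Enum

class RuleType(Enum):
    OBLIGATION = "obligation"
    PERMISSION = "permission"
    PROHIBITION = "prohibition"

@dataclass
class LegalRule:
    """An IF...THEN rule extended with a rule type and a process aspect."""
    conditions: list[str]    # the IF part: conditions that trigger the rule
    conclusion: str          # the THEN part: the prescribed behaviour
    rule_type: RuleType      # deontic classification of the rule
    process_aspect: str      # e.g. which process task or event the rule constrains

    def holds(self, facts: set[str]) -> bool:
        """Check whether all conditions are satisfied by a set of known facts."""
        return all(c in facts for c in self.conditions)

# Hypothetical example: a notification obligation attached to a data-collection task.
rule = LegalRule(
    conditions=["personal_data_collected", "consent_not_obtained"],
    conclusion="notify_data_subject",
    rule_type=RuleType.OBLIGATION,
    process_aspect="data_collection_task",
)
print(rule.holds({"personal_data_collected", "consent_not_obtained"}))  # True
```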

Relevance:

30.00%

Publisher:

Abstract:

The longevity of seed in the soil is a key determinant of the cost and length of weed eradication programs. Soil seed bank information and ongoing research inform the planning and reporting of two nationally cost-shared weed eradication programs based in tropical north Queensland. These eradication programs target serious weeds such as Chromolaena odorata, Mikania micrantha, Miconia calvescens, Clidemia hirta and Limnocharis flava. Various methods are available for estimating soil seed persistence. Field methods for estimating total and germinable soil seed densities include seed packet burial trials, extracting seed from field soil samples, germinating seed in field soil samples, and observations from native range seed bank studies. Interrogating field control records can also indicate the length of the control and monitoring periods needed to exhaust the seed bank. Recently, laboratory tests that rapidly age seed have provided an additional indicator of relative seed persistence. Each method has its advantages, drawbacks and logistical constraints.

Relevance:

30.00%

Publisher:

Abstract:

Quantum ensembles form easily accessible architectures for studying various phenomena in quantum physics, quantum information science and spectroscopy. Here we review some recent protocols for measurements in quantum ensembles that utilize ancillary systems. We also illustrate these protocols experimentally via nuclear magnetic resonance techniques. In particular, we review noninvasive measurements, the extraction of expectation values of various operators, the characterization of quantum states and quantum processes, and finally quantum noise engineering.
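
As a minimal numerical illustration of the quantity these protocols extract, the sketch below computes an ensemble expectation value <O> = Tr(rho O) for a single spin-1/2. This is generic quantum mechanics, not the ancilla-assisted NMR protocols reviewed here; the state and operators are toy assumptions.

```python
import numpy as np

# Pauli operators for a single qubit / spin-1/2 ensemble member.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

# A mixed ensemble state: mostly |0><0| plus some depolarising noise.
p = 0.9
rho = p * np.array([[1, 0], [0, 0]], dtype=complex) + (1 - p) * np.eye(2) / 2

def expectation(rho: np.ndarray, op: np.ndarray) -> float:
    """Ensemble expectation value <O> = Tr(rho O) of an operator O."""
    return float(np.real(np.trace(rho @ op)))

print(expectation(rho, sigma_z))  # 0.9: partial polarisation along z
print(expectation(rho, sigma_x))  # 0.0: no coherence along x
```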

Relevance:

30.00%

Publisher:

Abstract:

Understanding the guiding principles of sensory coding strategies is a main goal in computational neuroscience. Among others, the principles of predictive coding and slowness appear to capture aspects of sensory processing. Predictive coding postulates that sensory systems are adapted to the structure of their input signals such that information about future inputs is encoded. Slow feature analysis (SFA) is a method for extracting slowly varying components from quickly varying input signals, thereby learning temporally invariant features. Here, we use the information bottleneck method to state an information-theoretic objective function for temporally local predictive coding. We then show that the linear case of SFA can be interpreted as a variant of predictive coding that maximizes the mutual information between the current output of the system and the input signal in the next time step. This demonstrates that the slowness principle and predictive coding are intimately related.
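
To make the slowness principle concrete, here is a minimal sketch of linear SFA in the standard whitening-plus-eigendecomposition formulation; the toy signal is an invented example, not data from the paper.

```python
import numpy as np

def linear_sfa(x: np.ndarray, n_features: int = 1) -> np.ndarray:
    """x has shape (time, dims); returns the n slowest unit-variance projections."""
    x = x - x.mean(axis=0)                 # center
    # Whiten: rotate and scale so the covariance of the data becomes identity.
    evals, evecs = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (evecs / np.sqrt(evals))
    # Slowness: minimise the variance of the temporal difference signal.
    devals, devecs = np.linalg.eigh(np.cov(np.diff(z, axis=0), rowvar=False))
    return z @ devecs[:, :n_features]      # eigenvalues ascend, slowest first

# Toy input: one slow sine mixed into channels dominated by fast noise.
t = np.linspace(0, 10 * np.pi, 5000)
slow = np.sin(0.1 * t)
noise = np.random.randn(2, t.size)
x = np.stack([slow + noise[0], slow - noise[0], noise[1]], axis=1)
y = linear_sfa(x)
print(abs(np.corrcoef(y[:, 0], slow)[0, 1]))  # close to 1: slow source recovered
```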

Relevance:

30.00%

Publisher:

Abstract:

Most research on technology roadmapping has focused on its practical applications and the development of methods to enhance its operational process. Thus, despite a demand for well-supported, systematic information, little attention has been paid to how and which information can be utilised in technology roadmapping. This paper therefore aims to propose a methodology for structuring technological information in order to facilitate the process. To this end, eight methods are suggested to provide useful information for technology roadmapping: summary, information extraction, clustering, mapping, navigation, linking, indicators and comparison. This research identifies the characteristics of significant data that can potentially be used in roadmapping, and presents an approach to extracting important information from such raw data through various data mining techniques, including text mining, multi-dimensional scaling and K-means clustering. In addition, this paper explains how this approach can be applied in each step of roadmapping. The proposed approach is applied to develop a roadmap of radio-frequency identification (RFID) technology to illustrate the process practically. © 2013 Taylor & Francis.
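
As a tangible illustration of the clustering step named above, the sketch below groups a few invented technology snippets by their TF-IDF text features with K-means. The corpus, cluster count and parameters are assumptions for illustration, not the paper's data or pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented document snippets standing in for raw technology data.
docs = [
    "RFID tag antenna design for UHF readers",
    "passive RFID tag power harvesting circuits",
    "supply chain tracking with RFID middleware",
    "warehouse inventory management software platform",
    "logistics data integration and middleware services",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for label, doc in sorted(zip(labels, docs)):
    print(label, doc)
```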

Relevance:

30.00%

Publisher:

Abstract:

Information visualization can accelerate perception, provide insight and control, and harness the flood of valuable data to gain a competitive advantage in making business decisions. Although such a statement seems obvious, the literature lacks practical evidence of the benefit of information visualization. The main contribution of this paper is to illustrate how, for a major European apparel retailer, the visualization of performance information plays a critical role in improving business decisions and in extracting insights from Radio Frequency Identification (RFID)-based performance measures. In this paper, we identify - based on a literature review - three fundamental managerial functions of information visualization, namely: a communication medium, a knowledge management means, and a decision-support instrument. Then, we show - based on real industrial case evidence - how information visualization supports business decision-making. Several examples are provided to demonstrate the benefit of information visualization through its three identified managerial functions. We find that - depending on the way performance information is shaped, communicated, and made interactive - it not only helps decision making, but also offers a means of knowledge creation, as well as an appropriate communication channel. © 2014 World Scientific Publishing Company.

Relevance:

30.00%

Publisher:

Abstract:

We proposed a novel methodology which first extracts features from species' complete genome data using k-tuples, and then studies the evolutionary relationship between SARS-CoV and other coronavirus species using a method called "high-dimensional information geometry". We also used a method based on computing the minimum spanning tree to construct the phylogenetic tree of the coronaviruses. From the construction of the unrooted phylogenetic tree, we found that the evolutionary distance between SARS-CoV and the other coronavirus species is comparatively large. The tree accurately reconstructed the three groups of the other coronaviruses. We also validated the assertion from other literature that SARS-CoV is similar to the coronavirus species in Group I.
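
A hedged sketch of the two steps named in this abstract follows: k-tuple (k-mer) frequency features computed from sequences, then a minimum spanning tree over pairwise distances. The toy sequences and the Euclidean distance are illustrative assumptions, not the paper's genomes or its information-geometry metric.

```python
import itertools
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def kmer_features(seq: str, k: int = 2) -> np.ndarray:
    """Normalised frequency vector over all 4**k DNA k-tuples."""
    kmers = ["".join(p) for p in itertools.product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        counts[index[seq[i:i + k]]] += 1
    return counts / counts.sum()

# Toy sequences standing in for complete genomes.
seqs = {"A": "ACGTACGTACGGTACC", "B": "ACGTACGAACGGTACC", "C": "TTGGTTGGTTAACCAA"}
names = list(seqs)
feats = np.array([kmer_features(s) for s in seqs.values()])

# Pairwise Euclidean distances, then the MST over the complete graph.
dists = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
mst = minimum_spanning_tree(dists).toarray()
for i, j in zip(*mst.nonzero()):
    print(f"{names[i]} -- {names[j]}: {mst[i, j]:.3f}")  # A-B join first, C is distant
```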

Relevance:

30.00%

Publisher:

Abstract:

Localization is an essential feature for many mobile wireless applications. Data collected from applications such as environmental monitoring, package tracking or position tracking has no meaning without knowing the location where it was gathered. Other applications use location information as a building block, for example geographic routing protocols, data dissemination protocols and location-based services such as sensing coverage. Most localization techniques involve trade-offs among features such as the deployment of special hardware, the level of accuracy and the computation power required. In this paper, we present an algorithm that extracts location constraints from connectivity information. Our solution, which requires no special hardware and only a small number of landmark nodes, uses two types of location constraints. The spatial constraints derive the estimated locations by observing which nodes are within communication range of each other. The temporal constraints refine the areas computed by the spatial constraints, using properties of time and space extracted from a contact trace. The intuition behind the temporal constraints is to limit the possible locations of a node using its previous and future locations. To quantify the improvement gained by refining the nodes' estimated areas with temporal information, we performed simulations using synthetic and real contact traces. The results show this improvement and also the difficulties of using real traces.
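
The sketch below illustrates the spatial constraint on a discrete grid: a node must lie within communication range of every landmark it hears and outside the range of every landmark it does not. The landmark positions, range and grid are invented; the paper's algorithm further refines this area with temporal constraints from a contact trace.

```python
import numpy as np

R = 30.0                                    # assumed communication range (m)
landmarks = np.array([[20.0, 20.0], [50.0, 30.0], [35.0, 60.0]])
heard = {0, 1}                              # landmarks the node can hear

xs, ys = np.meshgrid(np.arange(100.0), np.arange(100.0))
feasible = np.ones_like(xs, dtype=bool)
for i, (lx, ly) in enumerate(landmarks):
    d = np.hypot(xs - lx, ys - ly)
    # Inside range of heard landmarks, outside range of unheard ones.
    feasible &= (d <= R) if i in heard else (d > R)

centroid = (xs[feasible].mean(), ys[feasible].mean())
print(f"feasible cells: {feasible.sum()}, centroid estimate: {centroid}")
```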

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this paper is to demonstrate a technique that utilizes underground mine drift profile data to estimate the absolute roughness of a mine drift, in order to implement the Darcy-Weisbach equation for mine ventilation calculations. This technique could provide mine ventilation engineers with more accurate information upon which to base their ventilation system designs. The paper presents preliminary work suggesting that it is possible to estimate the absolute roughness of drift-like tunnels by analyzing profile data (e.g., collected using a scanning laser rangefinder). The absolute roughness is then used to estimate the friction factor employed in the Darcy-Weisbach equation. The presented technique is based on an analysis of the spectral characteristics of profile ranges. Simulations based on real mine data are provided to illustrate the potential viability of this method. It is shown that mine drift roughness profiles appear similar to Gaussian profiles.
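
To show where an absolute roughness estimate feeds in downstream, here is a small sketch using the Swamee-Jain approximation to the Colebrook equation for the Darcy friction factor, followed by a Darcy-Weisbach pressure-loss calculation. The correlation choice and all numbers are illustrative assumptions, not the paper's values.

```python
import math

def swamee_jain(eps: float, d_h: float, re: float) -> float:
    """Darcy friction factor from absolute roughness eps, hydraulic diameter d_h,
    and Reynolds number re (explicit approximation to the Colebrook equation)."""
    return 0.25 / math.log10(eps / (3.7 * d_h) + 5.74 / re**0.9) ** 2

eps = 0.02      # assumed absolute roughness of the drift wall (m)
d_h = 4.0       # assumed hydraulic diameter of the drift (m)
re = 1.5e6      # assumed Reynolds number of the ventilation air flow

f = swamee_jain(eps, d_h, re)

# Darcy-Weisbach pressure loss over a drift of length L at mean air speed v.
rho, L, v = 1.2, 500.0, 5.0   # air density (kg/m^3), length (m), speed (m/s)
dp = f * (L / d_h) * rho * v**2 / 2
print(f"friction factor f = {f:.4f}, pressure loss = {dp:.1f} Pa")
```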

Relevance:

30.00%

Publisher:

Abstract:

When multiple sources provide information about the same unknown quantity, fusing them into a synthetic, interpretable message is often a tedious problem, especially when the sources are conflicting. In this paper, we propose to use possibility theory and the notion of maximal coherent subsets, often used in logic-based representations, to build a fuzzy belief structure that is instrumental both for extracting useful information about various features of the information conveyed by the sources and for compressing this information into a unique possibility distribution. Extensions and properties of the basic fusion rule are also studied.
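
The set-theoretic core of the maximal-coherent-subsets idea can be sketched for interval-valued sources: a subset of sources is coherent when their intervals share a common point, and the subsets kept are those maximal under inclusion. This illustrates only that core, not the paper's fuzzy belief structure over possibility distributions; the intervals are invented.

```python
from itertools import combinations

def coherent(intervals):
    """Sources agree iff the intersection of their intervals is nonempty."""
    return max(a for a, _ in intervals) <= min(b for _, b in intervals)

def maximal_coherent_subsets(sources):
    idx = range(len(sources))
    coherent_sets = [
        frozenset(s)
        for r in range(1, len(sources) + 1)
        for s in combinations(idx, r)
        if coherent([sources[i] for i in s])
    ]
    # Keep only subsets not strictly contained in another coherent subset.
    return [s for s in coherent_sets if not any(s < t for t in coherent_sets)]

# Three partially conflicting sources reporting the same unknown quantity.
sources = [(0.0, 2.0), (1.5, 3.0), (4.0, 5.0)]
for subset in maximal_coherent_subsets(sources):
    print(sorted(subset))  # {0, 1} agree on [1.5, 2.0]; {2} stands alone
```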

Relevance:

30.00%

Publisher:

Abstract:

Web databases are now pervasive. Such a database can be accessed only via its query interface, usually an HTML query form. Extracting Web query interfaces is a critical step in data integration across multiple Web databases; it creates a formal representation of a query form by extracting the set of query conditions in it. This paper presents a novel approach to extracting Web query interfaces. In this approach, a generic set of query condition rules is created to define query conditions that are semantically equivalent to SQL search conditions. Query condition rules represent the semantic roles that labels and form elements play in query conditions, and how they are hierarchically grouped into the constructs of query conditions. To group labels and form elements in a query form, we exploit both their structural proximity in the hierarchy of structures in the query form, captured by the tree of nested tags in the HTML code of the form, and their semantic similarity, captured by the various short texts used in labels, form elements and their properties. We have implemented the proposed approach, and our experimental results show that it is highly effective.
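
One small ingredient of such extraction can be sketched as follows: walking the tag tree of an HTML form and pairing each form element with the label text nearest to it in the hierarchy. This is a deliberate simplification of the rule-based grouping described above, and the form markup is invented.

```python
from html.parser import HTMLParser

FORM = """
<form>
  <label>Title contains <input name="title" type="text"></label>
  <label>Max price <input name="price" type="number"></label>
  <select name="format"><option>hardcover</option></select>
</form>
"""

class FormWalker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = ""        # label text accumulated since the last field
        self.fields = []      # (label text, element name) pairs

    def handle_data(self, data):
        self.text += data.strip() + " "

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select"):
            self.fields.append((self.text.strip(), dict(attrs).get("name", "?")))
            self.text = ""    # subsequent text belongs to the next field

walker = FormWalker()
walker.feed(FORM)
for label, name in walker.fields:
    print(f"{name!r:10} <- {label!r}")
```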

Relevance:

30.00%

Publisher:

Abstract:

In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. The resulting volumes form what is called "Big Data", which scientists still do not fully know how to handle. Different Data Mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, meaning that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information is accurate. Finally, we compared different Data Mining methods to determine which performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
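
Since Memory Based Reasoning is essentially k-nearest-neighbours over stored examples, the sketch below applies it to TF-IDF vectors of photo tags to predict a land use class. The tags, classes and parameters are invented stand-ins; the study's actual Panoramio data are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Invented tag strings with land use labels, standing in for Panoramio data.
tags = [
    "beach sea sand waves", "harbour boats sea pier",
    "office towers downtown traffic", "skyscraper street cars",
    "wheat field tractor barn", "cows pasture farm fence",
]
land_use = ["coastal", "coastal", "urban", "urban", "agricultural", "agricultural"]

# MBR as 1-nearest-neighbour over TF-IDF features of the tag text.
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(tags, land_use)

print(model.predict(["sand dunes sea"]))        # expected: 'coastal'
print(model.predict(["barn tractor harvest"]))  # expected: 'agricultural'
```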