932 results for Information dispersal algorithm


Relevance:

30.00%

Publisher:

Abstract:

This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. The interface provides easy and intuitive access to the most popular functionality of the package, including the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode that combines the output of other aligners, and the template-based modes of T-Coffee that deliver high-accuracy alignments using structural or homology-derived templates. The three available template modes are Expresso, for aligning proteins with known 3D structures; R-Coffee, for aligning RNA sequences with conserved secondary structures; and PSI-Coffee, for accurately aligning distantly related sequences using homology extension. The new server benefits from recent improvements to the T-Coffee algorithm, can align up to 150 sequences of up to 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.

Relevance:

30.00%

Publisher:

Abstract:

A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
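
A minimal, illustrative sketch of the general idea rather than the authors' algorithm: the inverse of the memoryless distortion is assumed known, the mutual information is approximated by a lag-1 histogram estimate, and an FIR equalizer is tuned by finite-difference steepest descent (all helper names and parameter values are illustrative).

import numpy as np

def mutual_info_lag1(x, bins=16):
    """Histogram estimate of the mutual information between x[n] and x[n-1]."""
    joint, _, _ = np.histogram2d(x[:-1], x[1:], bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def estimate_input(y, inv_nonlin, taps=5, iters=100, lr=0.1, eps=0.05):
    """Steepest-descent search for FIR coefficients w minimizing the lag-1
    mutual information of s_hat = w * inv_nonlin(y)."""
    z = inv_nonlin(y)                       # undo the memoryless distortion
    w = np.zeros(taps); w[0] = 1.0          # start from the identity filter

    def cost(w):
        s_hat = np.convolve(z, w, mode="same")
        s_hat = (s_hat - s_hat.mean()) / (s_hat.std() + 1e-12)
        return mutual_info_lag1(s_hat)

    for _ in range(iters):
        c0, grad = cost(w), np.zeros_like(w)
        for k in range(taps):               # coarse finite-difference gradient
            wp = w.copy(); wp[k] += eps
            grad[k] = (cost(wp) - c0) / eps
        w -= lr * grad                      # steepest-descent update
    return np.convolve(z, w, mode="same"), w

# Toy Wiener system: iid input -> FIR filter -> invertible tanh distortion
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 4000)
y = np.tanh(np.convolve(s, [1.0, 0.6, 0.2], mode="same"))
s_hat, w = estimate_input(y, np.arctanh)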

Relevance:

30.00%

Publisher:

Abstract:

This paper describes Question Waves, an algorithm that can be applied to social search protocols such as Asknext or Sixearch. In this model, queries are propagated through the social network, with faster propagation through more trusted acquaintances. Question Waves uses only local information to make decisions and to obtain an answer ranking. With Question Waves, the answers that arrive first are the most likely to be relevant, and we computed the correlation of answer relevance with order of arrival to demonstrate this result. The correlations obtained are equivalent to those of heuristics that use global knowledge, such as profile similarity among users or the expertise value of an agent. Because Question Waves is compatible with the social search protocol Asknext, a search can be stopped once enough relevant answers have been found; moreover, stopping the search early introduces only a minimal risk of not obtaining the best possible answer. Furthermore, Question Waves does not require a re-ranking algorithm because the results arrive already sorted.
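
The propagation mechanism can be illustrated with a small sketch (not the published protocol): traversing an edge is assumed to take time inversely proportional to a trust weight, so queries reach trusted acquaintances sooner, and answers returned along those paths arrive first, already ordered.

import heapq

def question_wave(graph, trust, source, answerers, horizon=10.0):
    """Propagate a query from `source`; traversing an edge takes 1/trust time.
    Returns the answering agents ordered by the arrival time of their answer."""
    arrival = {source: 0.0}
    frontier = [(0.0, source)]
    while frontier:
        t, node = heapq.heappop(frontier)
        if t > arrival.get(node, float("inf")) or t > horizon:
            continue
        for neigh in graph.get(node, ()):
            t_next = t + 1.0 / trust[(node, neigh)]
            if t_next < arrival.get(neigh, float("inf")):
                arrival[neigh] = t_next
                heapq.heappush(frontier, (t_next, neigh))
    # Answers travel back over the same path, so they arrive at ~2x the
    # outbound time: the result list is already "ranked" by arrival.
    responders = [(2 * arrival[n], n) for n in answerers if n in arrival]
    return [n for _, n in sorted(responders)]

graph = {"me": ["ana", "bob"], "ana": ["carl"], "bob": ["dana"], "carl": [], "dana": []}
trust = {("me", "ana"): 0.9, ("me", "bob"): 0.3, ("ana", "carl"): 0.8, ("bob", "dana"): 0.7}
print(question_wave(graph, trust, "me", answerers={"carl", "dana"}))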

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a pose-based algorithm to solve the full SLAM problem for an autonomous underwater vehicle (AUV) navigating in an unknown and possibly unstructured environment. The technique combines probabilistic scan matching of range scans gathered with a mechanically scanned imaging sonar (MSIS) with the robot's dead-reckoning displacements, estimated from a Doppler velocity log (DVL) and a motion reference unit (MRU). The proposed method uses two extended Kalman filters (EKFs). The first estimates the local path travelled by the robot while the scan is being acquired, together with its uncertainty, and provides position estimates for correcting the distortions that the vehicle motion produces in the acoustic images. The second is an augmented-state EKF that estimates and stores the poses at which the scans were registered. The raw sensor data are processed and fused in-line, and no prior structural information or initial pose is assumed. The algorithm has been tested on an AUV guided along a 600 m path within a marina environment, showing the viability of the proposed approach.
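
As a rough illustration of the dead-reckoning role of the first filter, the sketch below performs one EKF prediction step for a planar [x, y, yaw] state driven by body-frame velocities (as a DVL would provide) and a yaw rate (as an MRU/gyro would provide); the motion model and noise values are assumptions, not the paper's formulation.

import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """One EKF prediction step for a planar dead-reckoning model."""
    px, py, yaw = x
    vx, vy, r = u                            # surge, sway, yaw rate (body frame)
    c, s = np.cos(yaw), np.sin(yaw)
    # Motion model: rotate body-frame velocities into the world frame
    x_pred = np.array([px + dt * (c * vx - s * vy),
                       py + dt * (s * vx + c * vy),
                       yaw + dt * r])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, dt * (-s * vx - c * vy)],
                  [0.0, 1.0, dt * ( c * vx - s * vy)],
                  [0.0, 0.0, 1.0]])
    P_pred = F @ P @ F.T + Q                 # propagate the uncertainty
    return x_pred, P_pred

x, P = np.zeros(3), np.eye(3) * 0.01
Q = np.diag([0.02, 0.02, 0.005])             # assumed process noise
x, P = ekf_predict(x, P, u=np.array([0.5, 0.0, 0.05]), Q=Q, dt=0.1)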

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a colour texture segmentation method that unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of each region is modelled by combining non-parametric kernel density estimation (which captures the colour behaviour) with classical co-occurrence-matrix texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. The regions then compete concurrently for the image pixels, segmenting the whole image while taking both information sources into account. Experimental results are presented that demonstrate the performance of the proposed method.
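
The co-occurrence-matrix texture descriptors mentioned above can be sketched as follows; the active-region competition and the kernel density colour model are omitted, and the 8-level quantisation and chosen features are assumptions.

import numpy as np

def glcm(patch, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel displacement (dx, dy)."""
    q = (patch.astype(float) / 256 * levels).astype(int).clip(0, levels - 1)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1
    return M / M.sum()

def texture_features(patch):
    """Classical co-occurrence features: contrast, energy, homogeneity."""
    P = glcm(patch)
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

patch = np.random.default_rng(1).integers(0, 256, size=(32, 32))
print(texture_features(patch))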

Relevance:

30.00%

Publisher:

Abstract:

Image segmentation of natural scenes constitutes a major problem in machine vision. This paper presents a new proposal for the image segmentation problem based on the integration of edge and region information. The approach begins by detecting the main contours of the scene, which are later used to guide a concurrent set of growing processes. A preliminary analysis of the seed pixels allows the homogeneity criterion to be adjusted to each region's characteristics during the growing process. Since the high variability of regions in outdoor scenes makes the classical homogeneity criteria ineffective, a new homogeneity criterion based on clustering analysis and convex hull construction is proposed. Experimental results demonstrate the reliability of the proposed approach.
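
A simplified sketch of one seeded growing process is given below; it uses a plain running-mean homogeneity test instead of the clustering and convex-hull criterion proposed in the paper, purely to illustrate the growing loop (the tolerance and image are illustrative).

import numpy as np
from collections import deque

def region_grow(img, seed, tol=12.0):
    """Grow a region from `seed` while new pixels stay within `tol`
    of the region's running mean (simplified homogeneity criterion)."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

img = np.full((64, 64), 50.0)
img[20:40, 20:40] = 120.0                      # a brighter homogeneous patch
print(region_grow(img, seed=(30, 30)).sum())   # grows over the 20x20 patch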

Relevance:

30.00%

Publisher:

Abstract:

In image processing, segmentation algorithms constitute one of the main focuses of research. In this paper, new image segmentation algorithms based on a hard version of the information bottleneck method are presented. The objective of this method is to extract a compact representation of a variable, considered the input, with minimal loss of mutual information with respect to another variable, considered the output. First, we introduce a split-and-merge algorithm based on the definition of an information channel between a set of regions (input) of the image and the intensity histogram bins (output). From this channel, the maximization of the mutual information gain is used to optimize the image partitioning. Then, the merging of the regions obtained in the previous phase is carried out by minimizing the loss of mutual information. From the inversion of the above channel, we also present a new histogram clustering algorithm based on the minimization of the mutual information loss, where the input variable now represents the histogram bins and the output is given by the set of regions obtained from the split-and-merge algorithm. Finally, we introduce two new clustering algorithms which show how the information bottleneck method can be applied to the registration channel obtained when two multimodal images are correctly aligned. Experiments on 2-D and 3-D images show the behavior of the proposed algorithms.
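
The information channel between regions and histogram bins, and the mutual-information loss used as the merging criterion, can be sketched compactly; uniform intensity binning and the toy label image are assumptions, and the full split-and-merge procedure is omitted.

import numpy as np

def channel(labels, img, bins=16):
    """Joint distribution p(region, intensity bin) of the image channel."""
    b = (img.astype(float) / img.max() * (bins - 1)).astype(int)
    joint = np.zeros((labels.max() + 1, bins))
    np.add.at(joint, (labels.ravel(), b.ravel()), 1.0)
    return joint / joint.sum()

def mutual_info(p):
    """I(R; B) of a joint distribution p over (region, bin)."""
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def merge_loss(p, r1, r2):
    """Mutual information lost by merging regions r1 and r2."""
    merged = p.copy()
    merged[r1] += merged[r2]
    merged = np.delete(merged, r2, axis=0)
    return mutual_info(p) - mutual_info(merged)

labels = np.repeat(np.arange(4), 16).reshape(8, 8)   # four stripes as "regions"
img = labels * 60.0 + np.random.default_rng(0).integers(0, 20, (8, 8))
p = channel(labels, img)
print(mutual_info(p), merge_loss(p, 0, 1))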

Relevance:

30.00%

Publisher:

Abstract:

In this work, a fuzzy linear system is used to solve the Leontief input-output model with fuzzy entries. For solving this model, we assume that the consumption matrix of the different sectors of the economy and the demand are known. These assumptions depend heavily on information obtained from the industries, and hence this information involves uncertainties. The aim of this work is to model these uncertainties and to represent them by fuzzy entries such as fuzzy numbers and LR-type fuzzy numbers (triangular and trapezoidal). A fuzzy linear system is built from the fuzzy data and solved using the Gauss-Seidel algorithm. Numerical examples show the efficiency of this algorithm. The famous example from Prof. Leontief, in which he solved the production levels for the U.S. economy in 1958, is also analyzed further.
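
For reference, a crisp (non-fuzzy) sketch of the same Gauss-Seidel scheme applied to the Leontief system (I - A)x = d is given below with assumed coefficients; the fuzzy version of the paper applies the iteration to the systems induced by the LR-type entries.

import numpy as np

def gauss_seidel(M, b, iters=100, tol=1e-10):
    """Solve M x = b with the Gauss-Seidel iteration."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x_old = x.copy()
        for i in range(len(b)):
            s = M[i, :i] @ x[:i] + M[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / M[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Crisp Leontief model: x = A x + d  <=>  (I - A) x = d
A = np.array([[0.2, 0.3],      # technical (consumption) coefficients, assumed values
              [0.4, 0.1]])
d = np.array([100.0, 50.0])    # final demand, assumed values
print(gauss_seidel(np.eye(2) - A, d))   # production level of each sector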

Relevance:

30.00%

Publisher:

Abstract:

This paper presents software, developed in the Delphi programming language, to compute a reservoir's annual regulated active storage based on the sequent-peak algorithm. Mathematical models used for that purpose generally require extended hydrological series, and the analysis of those series is usually performed with spreadsheets or graphical representations. Based on that, software for calculating reservoir active capacity was developed. An example calculation is shown using 30 years (1977 to 2009) of historical monthly mean flow data from the Corrente River, located in the São Francisco River Basin, Brazil. As an additional tool, an interface was developed to manage water resources, helping to manipulate data and to highlight information of interest to the user. With this interface, irrigation districts where water consumption is higher can also be analyzed as a function of specific seasonal water demand situations. A practical application shows that the program performs the calculation as originally proposed. It was designed to keep information organized and retrievable at any time, and to simulate seasonal water demands throughout the year, contributing to studies concerning reservoir projects. With this functionality, the program is an important tool for decision making in water resources management.
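
The sequent-peak computation at the core of the program can be sketched in a few lines; a single pass with a constant demand is assumed here, and the flow values are illustrative rather than the Corrente River series.

def sequent_peak(inflows, demand):
    """Required active storage: K[t] = max(0, K[t-1] + demand - inflow[t]),
    and the capacity is the largest deficit K[t] reached."""
    k, capacity = 0.0, 0.0
    for q in inflows:
        k = max(0.0, k + demand - q)
        capacity = max(capacity, k)
    return capacity

monthly_inflows = [8, 6, 4, 3, 2, 2, 3, 5, 9, 12, 14, 10]   # illustrative volumes
print(sequent_peak(monthly_inflows, demand=6.0))            # -> 17.0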

Relevance:

30.00%

Publisher:

Abstract:

Connectivity depends on rates of dispersal between communities. For marine soft-sediment communities, continued small-scale dispersal as post-larvae and adults can be as important in maintaining community composition as the initial recruitment of the substrate by pelagic larvae. In this thesis, post-larval dispersal strategies of benthic invertebrates, as well as the mechanisms by which communities are connected, were investigated. Such knowledge on dispersal is scarce, owing to the difficulty of measuring dispersal directly in nature, and dispersal had not previously been quantified in the Baltic Sea. Different trap types were used underwater to capture dispersing invertebrates at different sites while waves and currents were measured in parallel. Local community composition was found to change predictably under varying rates of dispersal and physical connectivity (waves and currents). This response was, however, dependent on the dispersal-related traits of taxa. Actively dispersing taxa are relatively better at maintaining their position, as they are less dependent on hydrodynamic conditions for dispersal and less prone to passive transport by currents. Taxa also dispersed in relative proportions that were distinctly different from the resident community composition, and a significant proportion (40%) of taxa were found to lack a planktonic larval life-stage. Community assembly was re-started in a large-scale manipulative field experiment over one year across several sites, which revealed how patterns of community composition (α-, β- and γ-diversity) change depending on rates of dispersal. The results also demonstrated that, in response to small-scale disturbance, initial recruitment was by nearby dominant species, after which other species arrived from successively further away. At later assembly times, the number of coexisting species increased beyond what was expected purely from local niche requirements (species sorting), transferring regional differences in community composition (β-diversity) to the local scale (α-diversity, mass effect). The findings of this thesis complement more theoretical studies in metacommunity ecology by demonstrating that understanding how and when individuals disperse relative to the underlying environmental heterogeneity is key to interpreting how patterns of diversity change across spatial scales. Such information from nature is critical when predicting responses to, for example, different types of disturbances or management actions in conservation.

Relevance:

30.00%

Publisher:

Abstract:

The issue of selecting an appropriate healthcare information system is an essential one. If the implemented healthcare information system does not fit a particular healthcare institution, for example because it contains unnecessary functions, the institution wastes its resources and its efficiency decreases. The purpose of this research is to develop a healthcare information system selection model to assist the decision-making process of choosing a healthcare information system. An appropriate healthcare information system helps healthcare institutions become more effective and efficient and keep up with the times. The research is based on a comparative analysis of 50 healthcare information systems and 6 interviews with experts from St. Petersburg healthcare institutions that already have experience in healthcare information system utilization. Thirteen characteristics of healthcare information systems (5 key and 7 additional features) are identified and considered in the development of the selection model. Variables are used in the selection model to narrow the decision algorithm and to avoid duplication of branches. The questions in the healthcare information system selection model are designed to be easy to understand for a typical decision-maker in a healthcare institution without a permanent establishment.

Relevance:

30.00%

Publisher:

Abstract:

The thesis introduces the octree and addresses the full range of problems encountered while building an imaging system based on octrees. An efficient bottom-up recursive algorithm and its iterative counterpart are presented for the raster-to-octree conversion of CAT scan slices. To improve the speed of generating the octree from the slices, the possibility of utilizing the inherent parallelism in the conversion programme is explored in the thesis. An octree node, which stores the volume information of a cube, often holds only the average density, and this can lead to a "patchy" distribution of density during image reconstruction. To alleviate this problem, the possibility of using vector quantization (VQ) to represent the information contained within a cube is explored. Considering how easily compression can be accommodated while generating octrees from CAT scan slices, the use of wavelet transforms to generate the compressed information in a cube is proposed. The modified algorithm for generating octrees from the slices is shown to accommodate the wavelet compression easily. Rendering the information stored in the octree is a complex task, chiefly because of the requirement to display volumetric information. The rays traced from each cube in the octree sum up the density en route, accounting for the opacities and transparencies produced by variations in density.
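
A bottom-up construction of the kind described can be sketched as follows: an octant whose eight children are identical leaves collapses into a single leaf, while mixed octants keep their children (storing only an average in such nodes is what produces the "patchy" reconstructions discussed above). The cubic power-of-two volume is an assumption.

import numpy as np

def build_octree(vol):
    """Bottom-up octree of a cubic volume whose side is a power of two."""
    n = vol.shape[0]
    if n == 1:
        return float(vol[0, 0, 0])            # leaf: the voxel density
    h = n // 2
    children = [build_octree(vol[x:x + h, y:y + h, z:z + h])
                for x in (0, h) for y in (0, h) for z in (0, h)]
    if all(not isinstance(c, list) for c in children) and len(set(children)) == 1:
        return children[0]                    # homogeneous cube: merge into one leaf
    return children                           # internal node: its eight sub-octants

vol = np.zeros((8, 8, 8))
vol[:4, :4, :4] = 1.0                         # one homogeneous octant of density 1
tree = build_octree(vol)
print(tree[0], len(tree))                     # that octant is a single leaf; 8 children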

Relevance:

30.00%

Publisher:

Abstract:

Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. When handling numeric data sets, the attributes are usually first converted to categorical types and then classified using information-gain concepts. Information gain is a popular and useful concept that indicates whether splitting on a given attribute yields any benefit in terms of information content, but this process is computationally intensive for large data sets. Moreover, popular decision tree algorithms such as ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, together with the statistical mean as the split point for attributes in completely numerical data sets. The new algorithm is shown to be competitive with its information-gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets, using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information-gain computation for large data sets with many attributes, and that it avoids the time-consuming conversion of huge numeric data sets to categorical data. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information-gain computations. The approach also blends the two closely related fields of statistics and data mining.
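
The proposed split criterion (the attribute mean as the split point and variance reduction as the score) can be sketched as follows; a numeric target is used for illustration and the full tree induction is omitted.

import numpy as np

def variance_reduction_split(X, y, attr):
    """Split attribute `attr` at its mean and score the split by the
    reduction in variance of the target values."""
    threshold = X[:, attr].mean()
    left, right = y[X[:, attr] <= threshold], y[X[:, attr] > threshold]
    if len(left) == 0 or len(right) == 0:
        return threshold, 0.0
    weighted = (len(left) * left.var() + len(right) * right.var()) / len(y)
    return threshold, y.var() - weighted

def best_split(X, y):
    """Pick the attribute whose mean-split reduces variance the most."""
    scores = [variance_reduction_split(X, y, a) for a in range(X.shape[1])]
    best = int(np.argmax([gain for _, gain in scores]))
    return best, scores[best][0]

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X[:, 1] > 0) * 5.0 + rng.normal(scale=0.1, size=200)
print(best_split(X, y))    # attribute 1 wins, split near its mean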

Relevance:

30.00%

Publisher:

Abstract:

This work proposes a parallel genetic algorithm for compressing scanned document images. A fitness function is designed using the Hausdorff distance, which also determines the terminating condition. The algorithm helps to locate the text lines, and a greater compression ratio is achieved with less distortion.
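
The Hausdorff-distance fitness component can be sketched for binary images as follows; the parallel genetic algorithm itself is omitted and the images are toy examples.

import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def fitness(original, reconstructed):
    """Higher fitness for reconstructions whose foreground shape (e.g. a text
    line) stays close, in Hausdorff distance, to the original's."""
    A = np.argwhere(original > 0).astype(float)
    B = np.argwhere(reconstructed > 0).astype(float)
    if len(A) == 0 or len(B) == 0:
        return 0.0
    return 1.0 / (1.0 + hausdorff(A, B))

img = np.zeros((20, 20)); img[5, 2:18] = 1        # a short "text line"
approx = np.zeros((20, 20)); approx[5, 3:17] = 1  # slightly degraded copy
print(fitness(img, img), fitness(img, approx))    # 1.0 and 0.5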