897 results for "Mesh generation from image data"


Relevance: 100.00%

Abstract:

Genome-wide association studies have been instrumental in identifying genetic variants associated with complex traits such as human disease or gene expression phenotypes. It has been proposed that extending existing analysis methods by considering interactions between pairs of loci may uncover additional genetic effects. However, the large number of possible two-marker tests presents significant computational and statistical challenges. Although several strategies to detect epistatic effects have been proposed and tested for specific phenotypes, so far there has been no systematic attempt to compare their performance using real data. We made use of thousands of gene expression traits from linkage and eQTL studies to compare the performance of different strategies. We found that using information from marginal associations between markers and phenotypes to detect epistatic effects yielded a lower false discovery rate (FDR) than a strategy relying solely on biological annotation in yeast, whereas results from human data were inconclusive. For future studies whose aim is to discover epistatic effects, we recommend incorporating information about marginal associations between SNPs and phenotypes instead of relying solely on biological annotation. Improved methods to discover epistatic effects will result in a more complete understanding of complex genetic effects.
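
For illustration, here is a minimal sketch of the two-stage strategy the abstract favours: markers are first screened by marginal association, and interaction tests are run only among the survivors. The cutoff, the linear interaction model, and all names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative two-stage epistasis scan (not the paper's code): screen
# markers by marginal association, then test pairwise interactions only
# among the survivors; the 0.01 cutoff is a hypothetical choice.
from itertools import combinations

import numpy as np
from scipy import stats

def marginal_pvalues(genotypes, phenotype):
    """Per-marker regression p-values; genotypes is samples x markers."""
    return np.array([stats.linregress(genotypes[:, j], phenotype).pvalue
                     for j in range(genotypes.shape[1])])

def two_stage_scan(genotypes, phenotype, cutoff=0.01):
    """Fit y ~ x_i + x_j + x_i*x_j and report the interaction p-value."""
    keep = np.where(marginal_pvalues(genotypes, phenotype) < cutoff)[0]
    n = len(phenotype)
    results = []
    for i, j in combinations(keep, 2):
        X = np.column_stack([np.ones(n), genotypes[:, i], genotypes[:, j],
                             genotypes[:, i] * genotypes[:, j]])
        beta, *_ = np.linalg.lstsq(X, phenotype, rcond=None)
        resid = phenotype - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t = beta[3] / np.sqrt(cov[3, 3])          # interaction coefficient
        results.append((i, j, 2 * stats.t.sf(abs(t), n - X.shape[1])))
    return results
```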

Relevance: 100.00%

Abstract:

In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and considerable damage to property. We studied the potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries, focusing on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was produced at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work; in this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed. We carried out a comparative analysis of the terrain failures caused by Hurricane Mitch against four terrain factors, extracted from the DEM, that contributed to terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools and expressed as a terrain failure susceptibility map. To estimate the areas that could be affected by the path or deposition of the mobilized materials, we exploited the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. Together, the terrain failure and runout susceptibility maps provide a spatial prediction for landslides that could contribute to landslide risk mitigation.
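
The abstract does not name the exact bivariate statistic; the frequency-ratio method is a common choice and stands in here as a hedged sketch: each class of each terrain factor is scored by comparing its share of failed cells with its share of all cells, and the scores are summed per cell. The toy rasters replace real DEM-derived factors.

```python
# Hedged sketch of a bivariate susceptibility analysis via frequency ratios.
import numpy as np

def frequency_ratios(factor, failures):
    """Ratio of each class's share of failed cells to its share of all cells."""
    out = {}
    for c in np.unique(factor):
        in_class = factor == c
        share_failed = (failures & in_class).sum() / failures.sum()
        share_total = in_class.sum() / factor.size
        out[c] = share_failed / share_total
    return out

def susceptibility(factors, failures):
    """Sum per-factor frequency ratios cell by cell to score propensity."""
    score = np.zeros(failures.shape)
    for factor in factors:          # e.g. slope, aspect, curvature classes
        fr = frequency_ratios(factor, failures)
        score += np.vectorize(fr.get)(factor)
    return score

# Toy usage: two classified 4x4 factor rasters and an observed failure mask.
rng = np.random.default_rng(1)
slope_cls = rng.integers(0, 3, (4, 4))
litho_cls = rng.integers(0, 2, (4, 4))
failed = rng.random((4, 4)) < 0.3
print(susceptibility([slope_cls, litho_cls], failed))
```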

Relevance: 100.00%

Abstract:

Here we present a 30 000-year low-resolution climate record reconstructed from groundwater data. The investigated site is located in the Bohemian Cretaceous Basin, in the corridor between the Scandinavian ice sheet and the Alpine ice field. Noble gas temperatures (NGTs) obtained from the groundwater preserved multicentennial temperature variability and indicate a cooling of at least 5-7 °C during the Last Glacial Maximum (LGM). This is further confirmed by the depleted δ18O and δ2H values at the LGM. High excess air (ΔNe) at the end of the Pleistocene is possibly related to abrupt changes in recharge dynamics due to the advance and retreat of ice covers and permafrost. These results agree with the fact that during the LGM permafrost and small glaciers developed in the inner valleys of the Giant Mountains (located in the watershed of the aquifers). A temporal decrease of deuterium excess from the pre-industrial Holocene to the present day is linked to an increase in air temperatures, and probably also to an increase in water pressure at the source region of precipitation, over the past few hundred years.

Relevance: 100.00%

Abstract:

We provide evidence that Formica paralugubris Seifert, 1996, a wood ant species recently described from Switzerland, is present in the Italian Alps. Until 1996, this species was confused with F. lugubris Zetterstedt, 1838. We examined the wood ant collection deposited at the University of Pavia (Italy) and collected new samples within the Italian Alps. Formica paralugubris appears to be more abundant than F. lugubris; moreover, the two species were found in sympatry in some localities.

Relevance: 100.00%

Abstract:

This article reports on a lossless data hiding scheme for digital images in which the hiding capacity is determined either by a minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data is hidden within the image prediction errors; well-known prediction algorithms such as the median edge detector (MED), gradient-adjusted prediction (GAP), and Jiang prediction are tested for this purpose. First the histogram of the prediction errors of the image is computed, and then, based on the required capacity or desired image quality, the prediction error values with frequencies larger than this capacity are shifted. The empty space created by the shift is used for embedding the data. Experimental results show a distinct superiority of the prediction error histogram over the conventional image histogram itself, owing to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, in which subjective quality is traded for hiding capacity: the positive and negative error values are chosen such that the sum of their frequencies in the histogram just exceeds the given capacity or satisfies the quality constraint.
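
A minimal sketch of the core mechanism follows, assuming the MED predictor and an error histogram peaking at zero (typical for natural images). Overflow handling and the extraction side are omitted, and none of this is the paper's actual implementation.

```python
# Prediction-error histogram shifting with the MED predictor (sketch).
import numpy as np

def med_predict(a, b, c):
    """Median edge detector: a = left, b = above, c = upper-left neighbour."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def embed(image, bits, peak=0):
    """Shift errors above `peak` by one, then hide bits in the peak bin.

    Contexts use already-marked pixels, so a decoder scanning in the same
    raster order can recompute every prediction.
    """
    marked = image.astype(np.int32).copy()
    k = 0
    for i in range(1, marked.shape[0]):
        for j in range(1, marked.shape[1]):
            pred = med_predict(marked[i, j - 1], marked[i - 1, j],
                               marked[i - 1, j - 1])
            e = marked[i, j] - pred
            if e > peak:                 # shift: empty the bin at peak + 1
                marked[i, j] += 1
            elif e == peak and k < len(bits):
                marked[i, j] += bits[k]  # embed one 0/1 bit in the gap
                k += 1
    return marked, k                     # k = bits actually embedded
```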

Relevance: 100.00%

Abstract:

This letter presents a lossless data hiding scheme for digital images which uses an edge detector to locate plain areas for embedding. The proposed method takes advantage of the well-known gradient-adjusted prediction used in image coding. In the suggested scheme, prediction errors and edge values are first computed; then, excluding the edge pixels, the prediction errors are slightly shifted to embed data. The aim of the proposed scheme is to reduce the number of modified pixels, and thus improve transparency, by leaving the edge pixels of the image unchanged. The experimental results demonstrate that the proposed method can hide more secret data than the known techniques at the same PSNR, showing that using an edge detector to locate plain areas for lossless data embedding improves the trade-off between embedding rate and the PSNR of the marked image with respect to the original.
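
The edge-exclusion idea can be sketched as follows; the gradient test and threshold are placeholders, since the letter's actual edge detector is not reproduced here.

```python
# Sketch: build a plain-area mask and embed only where gradients are small.
import numpy as np

def plain_area_mask(image, threshold=20):
    """True where horizontal plus vertical gradient magnitude is small."""
    img = image.astype(np.int32)
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return (gx + gy) < threshold

# Inside a histogram-shifting loop like the one sketched earlier, an edge
# pixel is simply skipped, leaving its value (and the image's edges) intact:
#     if not mask[i, j]:
#         continue
```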

Relevance: 100.00%

Abstract:

An increasing number of studies in recent years have sought to identify individual inventors from patent data. A variety of heuristics have been proposed for using the names and other information disclosed in patent documents to establish who is who in patents. This paper contributes to this literature by describing a methodology for identifying inventors in patents filed with the European Patent Office (EPO). As in much of this literature, we follow a three-step procedure: (1) the parsing stage, aimed at reducing the noise in the inventor's name and other fields of the patent; (2) the matching stage, where name-matching algorithms are used to group similar names; and (3) the filtering stage, where additional information and various scoring schemes are used to filter these similarly named inventors. The paper presents the results obtained by applying the algorithms to the set of European inventors filing with the EPO over a long period of time.
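
A hedged sketch of the three stages follows; the threshold, field names, and the greedy grouping are illustrative assumptions, not the paper's algorithms.

```python
# Parsing, matching, and filtering for inventor disambiguation (sketch).
import re
import unicodedata
from difflib import SequenceMatcher

def parse(name):
    """Stage 1 (parsing): strip accents, punctuation and case noise."""
    flat = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z ]", " ", flat.lower()).strip()

def match(records, threshold=0.9):
    """Stage 2 (matching): greedily group records with similar names."""
    groups = []
    for rec in records:
        for group in groups:
            sim = SequenceMatcher(None, parse(rec["name"]),
                                  parse(group[0]["name"])).ratio()
            if sim >= threshold:
                group.append(rec)
                break
        else:
            groups.append([rec])
    return groups

def filter_stage(group):
    """Stage 3 (filtering): corroborate with another field, here the city."""
    return [r for r in group if r.get("city") == group[0].get("city")]

patents = [{"name": "Müller, Hans", "city": "Munich"},
           {"name": "MULLER HANS", "city": "Munich"},
           {"name": "Rossi, Maria", "city": "Turin"}]
print(match(patents, threshold=0.8))
```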

Relevance: 100.00%

Abstract:

Coffee production has been closely linked to the economic development of Brazil and, even today, coffee is an important product of the national agriculture. The State of Minas Gerais currently accounts for 52% of the total coffee area in Brazil. Remote sensing data can provide information for monitoring and mapping coffee crops faster and more cheaply than conventional methods. In this context, the objective of this study was to assess the effectiveness of coffee crop mapping in the Monte Santo de Minas municipality, Minas Gerais State, Brazil, from fraction images derived from MODIS data in both the dry and rainy seasons. The Spectral Linear Mixing Model was used to derive fraction images of soil, coffee, and water/shade. These fraction images served as input for supervised automatic classification using the Support Vector Machine (SVM) approach. The best results for Overall Accuracy and Kappa Index were obtained in the dry-season classification, with 67% and 0.41, respectively.
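
The Spectral Linear Mixing Model step can be sketched as follows: each pixel spectrum is modelled as a non-negative combination of endmember spectra, and the solved coefficients form the soil, coffee, and water/shade fraction images. The endmember values below are toy numbers, not the study's MODIS endmembers.

```python
# Linear spectral unmixing via non-negative least squares (sketch).
import numpy as np
from scipy.optimize import nnls

endmembers = np.array([      # rows: bands; columns: soil, coffee, water/shade
    [0.30, 0.05, 0.02],
    [0.35, 0.40, 0.03],
    [0.25, 0.20, 0.01],
])

def unmix(cube):
    """cube: rows x cols x bands -> rows x cols x fraction per endmember."""
    h, w, _ = cube.shape
    fractions = np.zeros((h, w, endmembers.shape[1]))
    for i in range(h):
        for j in range(w):
            fractions[i, j], _ = nnls(endmembers, cube[i, j])
    return fractions

# A 2x2 toy "image" mixing the first two endmembers half-and-half:
pixel = 0.5 * endmembers[:, 0] + 0.5 * endmembers[:, 1]
cube = np.tile(pixel, (2, 2, 1))
print(unmix(cube)[0, 0])     # ~ [0.5, 0.5, 0.0]
```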

Relevance: 100.00%

Abstract:

The objective of this study was to predict, by means of Artificial Neural Networks (ANNs) of the multilayer perceptron type, the texture attributes of light cheesecurds perceived by trained judges on the basis of instrumental texture measurements. Inputs to the network were the instrumental texture measurements of light cheesecurd (imitative and fundamental parameters); output variables were the sensory attributes consistency and spreadability. Nine light cheesecurd formulations composed of different combinations of fat and water were evaluated, and the measurements obtained by the instrumental and sensory analyses of these formulations constituted the data set used for training and validating the network. Training was performed using a back-propagation algorithm. The selected network architecture was composed of 8-3-9-2 neurons in its layers; it quickly and accurately predicted the sensory texture attributes studied, showing a high correlation between predicted and experimental values for the validation data set and excellent generalization ability, with a validation RMSE of 0.0506.
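
A hedged stand-in for the reported network: an 8-input, (3, 9)-hidden, 2-output multilayer perceptron trained by backpropagation. The data below is a random placeholder, not the cheesecurd measurements, and scikit-learn's MLPRegressor stands in for the paper's network.

```python
# 8-3-9-2 multilayer perceptron sketch with placeholder data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((36, 8))      # 8 instrumental texture measurements per sample
y = rng.random((36, 2))      # sensory consistency and spreadability scores

model = MLPRegressor(hidden_layer_sizes=(3, 9), max_iter=5000, random_state=0)
model.fit(X, y)
rmse = np.sqrt(np.mean((model.predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```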

Relevance: 100.00%

Abstract:

Knowledge discovery in databases is the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data. The term data mining refers to the process of exploratory analysis that builds models on the data. To infer patterns from data, data mining involves approaches such as association rule mining, classification techniques, and clustering techniques. Among the many data mining techniques, clustering plays a major role, since it groups related data for assessing properties and drawing conclusions. Most clustering algorithms act on a dataset with a uniform format, since the similarity or dissimilarity between data points is a significant factor in finding the clusters. If a dataset contains mixed attributes, i.e. a combination of numerical and categorical variables, a preferred approach is to convert the different formats into a uniform one. This research study explores various techniques for converting mixed data sets to a numerical equivalent, so that statistical and similar algorithms can be applied. The results of clustering mixed-category data after conversion to a numeric type are demonstrated using a crime data set. The thesis also proposes an extension to a well-known algorithm for handling mixed data types, to deal with data sets having only categorical data; the proposed conversion has been validated on a breast cancer data set. A further issue with the clustering process is the visualization of the output: geometric techniques such as scatter plots or projection plots are available, but none of them displays the result as a projection of the whole database; rather, they support attribute-pairwise analysis.
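
A minimal sketch of the mixed-attribute workflow the thesis describes: convert categorical columns to a numeric equivalent (one-hot encoding here; the thesis compares several conversions), scale, then cluster. The toy crime-style data and column names are made up.

```python
# Mixed-attribute clustering after numeric conversion (sketch).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "age":     [25, 40, 33, 58, 22, 61],
    "offense": ["burglary", "fraud", "burglary", "fraud", "assault", "fraud"],
})
numeric = pd.get_dummies(df, columns=["offense"])   # categorical -> 0/1
scaled = StandardScaler().fit_transform(numeric)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scaled)
print(labels)
```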

Relevance: 100.00%

Abstract:

Analysis by reduction is a linguistically motivated method for checking the correctness of a sentence; it can be modelled by restarting automata. In this paper we propose a method for learning restarting automata that are strictly locally testable (SLT-R-automata). The method is based on the concept of identification in the limit from positive examples only. We also characterize the class of languages accepted by SLT-R-automata with respect to the Chomsky hierarchy.
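
The strictly locally testable core can be sketched independently of the restarting-automaton machinery: from positive examples alone, collect the permitted prefixes, suffixes, and length-k factors, and accept exactly the strings that introduce nothing new. This illustrates identification in the limit for strictly k-testable languages, not the paper's SLT-R construction.

```python
# Learning a strictly k-testable language from positive examples (sketch).
def learn(positive, k=2):
    prefixes, suffixes, factors = set(), set(), set()
    for w in positive:
        prefixes.add(w[:k - 1])
        suffixes.add(w[len(w) - k + 1:])
        factors.update(w[i:i + k] for i in range(len(w) - k + 1))
    return prefixes, suffixes, factors

def accepts(model, w, k=2):
    prefixes, suffixes, factors = model
    return (w[:k - 1] in prefixes and w[len(w) - k + 1:] in suffixes
            and all(w[i:i + k] in factors for i in range(len(w) - k + 1)))

model = learn(["ab", "aab", "aaab"])   # positive sample from a+b
print(accepts(model, "aaaab"))         # True: only known 2-factors appear
print(accepts(model, "aba"))           # False: suffix "a" was never seen
```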

Relevance: 100.00%

Abstract:

Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
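
A compact sketch of the likelihood-based machinery for a single Gaussian (the mixture case adds responsibility weights per component): the E-step replaces missing features by their conditional expectations and accumulates the conditional covariance, and the M-step re-estimates the parameters from the expected sufficient statistics. This is an illustration in the spirit of the paper, not its code.

```python
# EM for a multivariate Gaussian with missing features (sketch).
import numpy as np

def em_gaussian(X, n_iter=50):
    """X: samples x features with np.nan marking missing entries."""
    n, d = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0)) + 1e-6 * np.eye(d)
    for _ in range(n_iter):
        Xfill = np.where(np.isnan(X), mu, X)
        C = np.zeros((d, d))                  # conditional-covariance term
        for i in range(n):
            m = np.isnan(X[i])
            if not m.any():
                continue
            o = ~m
            soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
            smo = sigma[np.ix_(m, o)]
            # E-step: conditional mean of the missing block given the observed
            Xfill[i, m] = mu[m] + smo @ soo_inv @ (X[i, o] - mu[o])
            C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - smo @ soo_inv @ smo.T
        # M-step: expected sufficient statistics
        mu = Xfill.mean(axis=0)
        diff = Xfill - mu
        sigma = (diff.T @ diff + C) / n
    return mu, sigma

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, .8], [.8, 1]], size=200)
X[rng.random(X.shape) < 0.2] = np.nan          # 20% missing at random
mu, sigma = em_gaussian(X)
print(mu, sigma, sep="\n")
```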

Relevance: 100.00%

Abstract:

In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has long been treated in visual servoing. Our approach is based on attaching several laser pointers to the camera, with their configuration designed to produce a suitable set of visual features. The aim of using structured light is not only to ease the image processing and to allow low-textured objects to be handled, but also to produce a control scheme with desirable properties such as decoupling, stability, good conditioning, and a good camera trajectory.