986 results for R-Statistical computing


Relevance: 80.00%

Abstract:

Since the beginning of 3D computer vision, it has been necessary to reduce the data to a tractable size while preserving the important aspects of the scene. This has become even more relevant with the new low-cost RGB-D sensors, which provide a stream of color and 3D data at approximately 30 frames per second. Many applications use these sensors and need a preprocessing step to downsample the data, either to reduce the processing time or to improve the data (e.g., by reducing noise or enhancing the important features). In this paper, we present a comparison of downsampling techniques based on different principles. Concretely, five downsampling methods are included: a bilinear-based method, a normal-based method, a color-based method, a combination of the normal- and color-based samplings, and a growing neural gas (GNG)-based approach. For the comparison, two different models acquired with the Blensor software have been used. Moreover, to evaluate the effect of the downsampling in a real application, a 3D non-rigid registration is performed with the sampled data. From the experimentation we conclude that, depending on the purpose of the application, some kernels of the sampling methods can drastically improve the results. Bilinear- and GNG-based methods provide homogeneous point clouds, whereas color-based and normal-based methods provide datasets with a higher density of points in areas with specific features. In the non-rigid registration application, if a color-based sampled point cloud is used, it is possible to properly register two datasets in cases where intensity data are relevant in the model, outperforming the results obtained when only a homogeneous sampling is used.
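The paper's implementations are not reproduced here; as a minimal sketch of the idea behind the color-based sampling, the following base-R function (all names hypothetical) keeps points with probability proportional to a crude per-point color-variation score, so feature-rich regions retain a higher density of points:

```r
# Minimal sketch of feature-weighted point-cloud downsampling (hypothetical,
# not the paper's implementation): points with higher color variation are
# kept with higher probability.
downsample_by_color <- function(cloud, colors, keep = 0.1) {
  # cloud:  n x 3 matrix of XYZ coordinates
  # colors: n x 3 matrix of RGB values in [0, 1]
  n <- nrow(cloud)
  # crude per-point score: distance of each color from the global mean color
  score <- sqrt(rowSums((colors - matrix(colMeans(colors), n, 3, byrow = TRUE))^2))
  prob  <- (score + 1e-6) / sum(score + 1e-6)        # avoid zero weights
  idx   <- sample(seq_len(n), size = ceiling(keep * n), prob = prob)
  list(points = cloud[idx, , drop = FALSE], colors = colors[idx, , drop = FALSE])
}
```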

Relevance: 80.00%

Abstract:

In many classification problems, it is necessary to consider the specific location in an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, the same features extracted from different locations could mean different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations in the n-dimensional space from which the feature vectors have been extracted. The main contribution is the implicit preservation of the topology of the original space, since considering the locations of the extracted features and their topology could ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features in adjacent areas of the n-dimensional space used to extract the feature vectors are explicitly mapped to adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features, such as trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out using the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and obtain high-performance trajectory classification, in contrast to not considering the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories. Results confirm the high accuracy of the nD-SOM-PINT, outperforming previous methods aimed at classifying the same datasets.
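As background for readers unfamiliar with the underlying model, here is a minimal sketch of one training epoch of a plain self-organizing map in base R; it is a generic SOM on a 1-D grid, not the constrained nD-SOM-PINT itself:

```r
# Minimal sketch of one training epoch of a plain self-organizing map in
# base R (a generic SOM, not the constrained nD-SOM-PINT from the abstract).
som_epoch <- function(weights, data, lr = 0.1, radius = 1) {
  # weights: m x d matrix of neuron codebook vectors on a 1-D grid
  # data:    n x d matrix of input feature vectors
  for (i in seq_len(nrow(data))) {
    x   <- data[i, ]
    bmu <- which.min(rowSums(sweep(weights, 2, x)^2))   # best-matching unit
    for (j in seq_len(nrow(weights))) {
      h <- exp(-((j - bmu)^2) / (2 * radius^2))         # neighbourhood kernel
      weights[j, ] <- weights[j, ] + lr * h * (x - weights[j, ])
    }
  }
  weights
}
```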

Relevance: 80.00%

Abstract:

Although many of the molecular interactions in kidney development are now well understood, the molecules involved in the specification of the metanephric mesenchyme from surrounding intermediate mesoderm and, hence, the formation of the renal progenitor population are poorly characterized. In this study, cDNA microarrays were used to identify genes enriched in the murine embryonic day 10.5 (E10.5) uninduced metanephric mesenchyme, the renal progenitor population, in comparison with more rostral derivatives of the intermediate mesoderm. Microarray data were analyzed using R statistical software to accurately determine the genes differentially expressed between these populations. Microarray outliers were biologically verified, and the spatial expression pattern of these genes at E10.5 and subsequent stages of early kidney development was determined by RNA in situ hybridization. This approach identified 21 genes preferentially expressed by the E10.5 metanephric mesenchyme, including Ewing sarcoma homolog, 14-3-3 theta, retinoic acid receptor-alpha, stearoyl-CoA desaturase 2, CD24, and cadherin-11, that may be important in the formation of renal progenitor cells. Cell surface proteins such as CD24 and cadherin-11 that were strongly and specifically expressed in the uninduced metanephric mesenchyme and mark the renal progenitor population may prove useful in the purification of renal progenitor cells by FACS. These findings may assist in the isolation and characterization of potential renal stem cells for use in cellular therapies for kidney disease.
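The abstract states only that the analysis was done in R; a minimal generic sketch of per-gene differential-expression testing (one common approach, not necessarily the authors' pipeline):

```r
# Minimal generic sketch of per-gene differential expression in base R
# (the abstract does not specify the method; this is one common approach).
# expr:  genes x arrays matrix, with gene IDs as row names
# group: two-level factor, one entry per array
diff_genes <- function(expr, group, fdr = 0.05) {
  pvals <- apply(expr, 1, function(g) t.test(g ~ group)$p.value)
  padj  <- p.adjust(pvals, method = "BH")   # Benjamini-Hochberg correction
  rownames(expr)[padj < fdr]                # genes called differentially expressed
}
```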

Relevance: 80.00%

Abstract:

We present and evaluate a novel supervised recurrent neural network architecture, the SARASOM, based on the associative self-organizing map. The performance of the SARASOM is evaluated and compared with the Elman network as well as with a hidden Markov model (HMM) in a number of prediction tasks using sequences of letters, including some experiments with a reduced lexicon of 15 words. The results were very encouraging, with the SARASOM learning better and achieving higher accuracy than both the Elman network and the HMM.
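For context on the letter-prediction task, a minimal sketch of a first-order Markov baseline in base R (a simple reference model, not the SARASOM or the HMM evaluated in the paper):

```r
# Minimal sketch of a first-order Markov baseline for next-letter prediction
# (a simple reference model, not the SARASOM or the HMM from the abstract).
train_markov <- function(text) {
  chars <- strsplit(text, "")[[1]]
  trans <- table(head(chars, -1), tail(chars, -1))   # bigram counts
  prop.table(trans, margin = 1)                      # row-normalised probabilities
}
predict_next <- function(model, ch) {
  names(which.max(model[ch, ]))                      # most likely successor
}

m <- train_markov("the cat sat on the mat")
predict_next(m, "h")   # "e"
```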

Relevance: 80.00%

Abstract:

Produced water is one of the most common wastes generated during the exploration and production of oil. This work aims to develop methodologies based on comparative statistical analysis of the hydrogeochemistry of production zones, in order to minimise high-cost interventions such as fluid identification tests (TIF). For the study, 27 samples were collected from five different production zones, and a total of 50 chemical species were measured. After the chemical analysis, the data were treated statistically using the R statistical software, version 2.11.1. The statistical analysis was performed in three steps. In the first step, the objective was to investigate the behaviour of the chemical species under study in each production zone through descriptive graphical analysis. The second step was to identify a function that classifies the production zone of each sample, using discriminant analysis. In the training stage, the rate of correct classification of the discriminant function was 85.19%. The next stage applied Principal Component Analysis, reducing the number of variables through linear combinations of the chemical species, in an attempt to improve the discriminant function obtained in the second stage and increase the discriminating power of the data, but the result was not satisfactory. In the Profile Analysis, curves were obtained for each production zone, based on the characteristics of the chemical species present in each zone. With this study it was possible to develop a method, combining hydrochemistry and statistical analysis, that can be used to distinguish the water produced in mature oil fields, so that it is possible to identify the production zone contributing to the excessive increase in water volume.
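The abstract names discriminant analysis in R but gives no code; a minimal sketch of that step, assuming MASS::lda and hypothetical data-frame and column names:

```r
# Minimal sketch of the discriminant-analysis step in R (the abstract names
# the method but not the code; 'samples' and 'zone' are hypothetical names).
library(MASS)

# samples: data frame with a 'zone' factor and one column per chemical species
fit  <- lda(zone ~ ., data = samples)
pred <- predict(fit, samples)$class

# training-set rate of correct classification (the paper reports 85.19%)
mean(pred == samples$zone)
```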

Relevance: 50.00%

Abstract:

Presents, in summary form, a set of computational systems currently used for statistical tasks in developed countries, and describes the activities that CEPAL is carrying out in this field and the possibilities for regional cooperation in which it is engaged.

Relevance: 40.00%

Abstract:

The main problem with current approaches to quantum computing is the difficulty of establishing and maintaining entanglement. A Topological Quantum Computer (TQC) aims to overcome this by using different physical processes that are topological in nature and which are less susceptible to disturbance by the environment. In a (2+1)-dimensional system, pseudoparticles called anyons have statistics that fall somewhere between bosons and fermions. The exchange of two anyons, an effect called braiding from knot theory, can occur in two different ways. The quantum states corresponding to the two elementary braids constitute a two-state system allowing the definition of a computational basis. Quantum gates can be built up from patterns of braids, and for quantum computing it is essential that the operator describing the braiding, the R-matrix, be unitary. The physics of anyonic systems is governed by quantum groups, in particular the quasi-triangular Hopf algebras obtained from finite groups by the application of the Drinfeld quantum double construction. Their representation theory has been described in detail by Gould and Tsohantjis, and in this review article we relate the work of Gould to TQC schemes, particularly that of Kauffman.
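The unitarity requirement mentioned above, together with the braid relation the R-matrix must satisfy, can be stated compactly; the following is standard textbook background, not taken from the reviewed paper:

```latex
% Standard conditions on the braiding operator R (textbook background,
% not taken from the reviewed paper itself):
% unitarity, so that braiding implements a valid quantum gate,
\[
  R R^{\dagger} = I,
\]
% and the Yang-Baxter equation, expressing consistency of the braid relations
\[
  (R \otimes I)(I \otimes R)(R \otimes I) = (I \otimes R)(R \otimes I)(I \otimes R).
\]
```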

Relevance: 40.00%

Abstract:

Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge in programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.
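The SPSS dialog itself is not reproduced here; a minimal sketch of ordinal-friendly exploratory factor analysis in R, assuming the psych package (the paper does not name the packages it builds on):

```r
# Minimal sketch of exploratory factor analysis on ordinal, Likert-type data
# in R, assuming the 'psych' package (the paper does not name its packages).
library(psych)

# items: data frame of Likert-type responses (e.g., integers 1-5)
rho <- polychoric(items)$rho            # polychoric correlation matrix
efa <- fa(r = rho, nfactors = 3, fm = "minres", rotate = "varimax")
print(efa$loadings, cutoff = 0.3)       # suppress small loadings
```

Polychoric correlations are one standard way to respect the ordinal measurement level instead of treating Likert responses as continuous.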

Relevance: 40.00%

Abstract:

R, from http://www.r-project.org/, is 'GNU S', a language and environment for statistical computing and graphics. Many classical and modern statistical techniques have been implemented in the environment, but many are supplied as packages. There are 8 standard packages and many more are available through the CRAN family of Internet sites, http://cran.r-project.org. We have started to develop a library of functions in R to support the analysis of mixtures, and our goal is a MixeR package for compositional data analysis that provides support for:

- operations on compositions: perturbation and power multiplication, subcomposition with or without residuals, centering of the data, computing Aitchison's, Euclidean and Bhattacharyya distances, compositional Kullback-Leibler divergence, etc.;

- graphical presentation of compositions in ternary diagrams and tetrahedrons with additional features: barycenter, geometric mean of the data set, percentile lines, marking and coloring of subsets of the data set, their geometric means, annotation of individual data in the set, etc.;

- dealing with zeros and missing values in compositional data sets, with R procedures for the simple and multiplicative replacement strategies;

- time series analysis of compositional data.

We will present the current status of MixeR development and illustrate its use on selected data sets.
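MixeR's actual function names are not given in the abstract; a minimal base-R sketch of the core compositional operations it describes:

```r
# Minimal base-R sketch of the core compositional operations described above
# (illustrative only; MixeR's actual function names are not given here).
clo     <- function(x) x / sum(x)        # closure to proportions
perturb <- function(x, y) clo(x * y)     # Aitchison perturbation
power   <- function(x, a) clo(x^a)       # power multiplication
aitchison_dist <- function(x, y) {
  clr <- function(z) log(z) - mean(log(z))   # centred log-ratio transform
  sqrt(sum((clr(x) - clr(y))^2))
}

x <- clo(c(0.2, 0.3, 0.5)); y <- clo(c(0.1, 0.4, 0.5))
aitchison_dist(x, y)
```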

Relevance: 40.00%

Abstract:

This paper uses Colombian household survey data collected over the period 1984-2005 to estimate Gini coefficients along with their corresponding standard errors. We find a statistically significant increase in wage income inequality following the adoption of the liberalisation measures of the early 1990s, and mixed evidence during the recovery years that followed the economic recession of the late 1990s. We also find that in several cases the observed differences in the Gini coefficients across cities have not been statistically significant.
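The paper's estimator and survey weighting are not reproduced here; a minimal base-R sketch of a Gini coefficient with a bootstrap standard error, on synthetic data:

```r
# Minimal sketch of a Gini coefficient with a bootstrap standard error in
# base R (illustrative; not the paper's exact estimator or survey weights).
gini <- function(x) {
  x <- sort(x); n <- length(x)
  2 * sum(x * seq_len(n)) / (n * sum(x)) - (n + 1) / n
}
gini_se <- function(x, B = 1000) {
  sd(replicate(B, gini(sample(x, replace = TRUE))))   # bootstrap over resamples
}

wages <- rlnorm(500, meanlog = 7, sdlog = 0.8)   # synthetic wage data
c(gini = gini(wages), se = gini_se(wages))
```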

Relevance: 40.00%

Abstract:

We outline our first steps towards marrying two new and emerging technologies: the Virtual Observatory (e.g., AstroGrid) and the computational grid. We discuss the construction of VOTechBroker, a modular software tool designed to abstract the tasks of submitting and managing a large number of computational jobs on a distributed computer system. The broker will also interact with the AstroGrid workflow and MySpace environments. We present our planned usage of the VOTechBroker in computing a huge number of n-point correlation functions from the SDSS, as well as fitting over a million CMBfast models to the WMAP data.
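VOTechBroker is not an R tool; purely to illustrate the fan-out pattern it automates (many independent jobs distributed over workers), a minimal sketch with R's parallel package, where fit_model and param_grid are hypothetical stand-ins:

```r
# Minimal sketch of the fan-out pattern the broker automates: many independent
# model evaluations distributed over workers (generic R 'parallel' code, not
# VOTechBroker; fit_model and param_grid are hypothetical stand-ins).
library(parallel)

param_grid <- as.list(seq_len(1000))          # stand-in for a million models
fit_model  <- function(p) sum(rnorm(100, p))  # stand-in for one expensive fit

cl <- makeCluster(4)                          # four local workers
results <- parLapply(cl, param_grid, fit_model)
stopCluster(cl)
```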

Relevance: 40.00%

Abstract:

The conductor-discriminant formula, namely the Hasse Theorem, states that if a number field K is fixed by a subgroup H of Gal(ℚ(ζ_n)/ℚ), the discriminant of K can be obtained from H by computing the product of the conductors of all characters defined modulo n which are associated to K. By calculating these conductors explicitly, we derive a formula to compute the discriminant of any subfield of ℚ(ζ_{p^r}), where p is an odd prime and r is a positive integer. © 2002 Elsevier Science USA.
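In the notation of the abstract, the general conductor-discriminant formula reads as follows (standard statement, given as background rather than the paper's derived special case):

```latex
% Standard statement of the conductor-discriminant (Hasse) formula
% (background for the abstract, not the paper's derived special case):
% for an abelian number field K with character group X(K),
\[
  \lvert d_K \rvert \;=\; \prod_{\chi \in X(K)} \mathfrak{f}(\chi),
\]
% where \mathfrak{f}(\chi) denotes the conductor of the character \chi.
```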