940 results for Graph Decomposition
Abstract:
This thesis develops a method for the reconstruction of incomplete experimental databases of more than two dimensions. The general idea is the iterative application of the high-order singular value decomposition to the incomplete database. The method is inspired by the seminal gappy reconstruction method for two-dimensional databases invented by Everson and Sirovich (1995), which was in turn improved by Beckers and Rixen (2003) and, independently, by Venturi and Karniadakis (2004). In addition, the method is adapted to treat both the noise characteristic of experimental databases and structured databases whose information does not form a perfect hyperrectangle. A three-dimensional toy-model database, obtained by discretizing a transcendental function, is used to calibrate and illustrate the method. An exhaustive study of the performance of the method and its variants is then presented for several aerodynamic databases. Specifically, three three-dimensional databases containing the pressure distribution over a wing are used: one generated by a semi-analytical method, with the aim of studying different types of spatial discretization, and two resulting from computational fluid dynamics (CFD) models. Finally, the method is applied to an experimental database of more than three dimensions containing force measurements for a Prandtl (box-wing) configuration, obtained from a wind-tunnel test campaign in which a wide space of geometric parameters of the configuration was explored, yielding a database in which the information is sparse.
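Not the thesis code, but a minimal numpy sketch of the two-dimensional gappy-SVD iteration the method builds on (Everson and Sirovich, 1995): missing entries are repeatedly replaced by a truncated-SVD reconstruction while the known entries are re-imposed. The rank, tolerance, and toy function below are illustrative assumptions.

```python
import numpy as np

def gappy_svd_fill(X, mask, rank=2, n_iter=200, tol=1e-8):
    """Iteratively fill missing entries (mask == False) of X with a
    rank-`rank` SVD reconstruction, keeping known entries fixed."""
    filled = np.where(mask, X, X[mask].mean())  # initial guess: mean of known data
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # truncated reconstruction
        new = np.where(mask, X, low_rank)                # re-impose known entries
        if np.linalg.norm(new - filled) < tol * np.linalg.norm(filled):
            return new
        filled = new
    return filled

# toy database from a transcendental function, with ~30% of entries removed
x, y = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 30), indexing="ij")
X = np.sin(2 * np.pi * x) * np.exp(y)
rng = np.random.default_rng(0)
mask = rng.random(X.shape) > 0.3
X_rec = gappy_svd_fill(np.where(mask, X, 0.0), mask)
print("relative error on gaps:",
      np.linalg.norm((X_rec - X)[~mask]) / np.linalg.norm(X[~mask]))
```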
Abstract:
This thesis presents a method to filter errors out of multidimensional databases. The method does not require any a priori information about the nature of the errors. In particular, the errors need not be small, random, or exhibit zero mean; they are only required to be relatively uncorrelated with the clean information contained in the database. The method is based on an improved extension of a seminal iterative gappy reconstruction method (able to reconstruct lost information at known positions in the database) due to Everson and Sirovich (1995). The improved gappy reconstruction method is evolved into a two-step error-filtering method: it first (a) identifies the error locations in the database and then (b) reconstructs the information at those locations by treating the associated data as gappy data. The resulting method filters out O(1) errors in an efficient fashion, both when these are random and when they are systematic, and both when they are concentrated and when they are spread throughout the database. The performance of the method is first illustrated using a two-dimensional toy-model database resulting from discretizing a transcendental function, and then tested on two CFD-computed, three-dimensional aerodynamic databases containing the pressure coefficient on the surface of a wing for varying values of the angle of attack. A more general performance analysis of the method is presented with the intention of quantifying the degree of randomness the method admits while maintaining correct performance and, secondly, the size of error the method can detect. Lastly, some improvements of the method are proposed, together with their respective verification.
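A minimal sketch of the two-step filter described above, under the simplifying assumption that errors betray themselves as large residuals against a low-rank SVD fit; the 3-sigma threshold, the rank, and the toy data are illustrative choices, not the thesis criteria.

```python
import numpy as np

def filter_errors(X, rank=1, n_sigma=3.0, n_iter=100):
    """Two-step error filter: (a) flag entries whose residual against a
    low-rank SVD fit is large; (b) treat them as gaps and refill them."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    residual = X - low_rank
    ok = np.abs(residual) < n_sigma * residual.std()  # (a) presumed-clean entries
    filled = np.where(ok, X, X[ok].mean())            # (b) gappy refill, iterated
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        filled = np.where(ok, X, (U[:, :rank] * s[:rank]) @ Vt[:rank])
    return filled, ~ok

# usage: corrupt a separable (rank-1) matrix at two entries, then filter
x, y = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 20), indexing="ij")
X = np.sin(2 * np.pi * x) * np.exp(y)
X_bad = X.copy()
X_bad[3, 4] = X_bad[17, 9] = 5.0                      # O(1) systematic errors
X_clean, flagged = filter_errors(X_bad)
print(flagged.sum(), np.abs(X_clean - X).max())       # flags spikes (maybe a few extras)
```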
Abstract:
A novel pedestrian motion prediction technique is presented in this paper. Its main achievement is that no previous observation, no knowledge of pedestrian trajectories, and no set of possible destinations are required, which makes it useful for autonomous surveillance applications. Prediction only requires the initial position of the pedestrian and a 2D representation of the scenario as an occupancy grid. First, the Fast Marching Method (FMM) is used to calculate the pedestrian arrival time for each position in the map; then, the likelihood that the pedestrian reaches those positions is estimated. The technique has been tested on synthetic and real scenarios. In all cases, accurate probability maps, as well as their representative graphs, were obtained at low computational cost.
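As a rough illustration of the arrival-time map, the sketch below uses Dijkstra's algorithm on the grid as a stand-in for the Fast Marching Method (FMM solves the same arrival-time problem more accurately on continuous domains); the binary occupancy grid and unit walking speed are assumptions.

```python
import heapq
import numpy as np

def arrival_time(occupancy, start, speed=1.0):
    """Approximate arrival times with Dijkstra on an 8-connected grid.
    occupancy: 2D bool array, True = obstacle. start: (row, col)."""
    rows, cols = occupancy.shape
    time = np.full((rows, cols), np.inf)
    time[start] = 0.0
    heap = [(0.0, start)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if t > time[r, c]:
            continue  # stale queue entry
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not occupancy[nr, nc]:
                nt = t + np.hypot(dr, dc) / speed  # travel time for this step
                if nt < time[nr, nc]:
                    time[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return time

grid = np.zeros((50, 50), dtype=bool)
grid[20:30, 10:40] = True          # a wall the pedestrian must walk around
T = arrival_time(grid, (5, 5))
print(T[45, 45])                   # arrival time at a far corner
```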
Abstract:
In a recent article [Khan, A. U., Kovacic, D., Kolbanovsky, A., Desai, M., Frenkel, K. & Geacintov, N. E. (2000) Proc. Natl. Acad. Sci. USA 97, 2984–2989], the authors claimed that ONOO−, after protonation to ONOOH, decomposes into 1HNO and 1O2 according to a spin-conserved unimolecular mechanism. This claim was based partially on their observation that nitrosylhemoglobin is formed via the reaction of peroxynitrite with methemoglobin at neutral pH. However, thermochemical considerations show that the yields of 1O2 and 1HNO are about 23 orders of magnitude lower than those of ⋅NO2 and ⋅OH, which are formed via the homolysis of ONOOH. We also show that methemoglobin forms no spectrally detectable product with peroxynitrite itself, but does so with the nitrite and H2O2 contaminants present in the peroxynitrite sample. Thus, there is no need to modify the present view of the mechanism of ONOOH decomposition, according to which initial homolysis into a radical pair, [ONO⋅ ⋅OH]cage, is followed by the diffusion of about 30% of the radicals out of the cage, while the rest recombine to nitric acid in the solvent cage.
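The decomposition pathway defended in the abstract can be summarized in a short scheme (a plain LaTeX rendering of the mechanism as stated; the branching percentages are those quoted above):

```latex
\mathrm{ONOO^{-}} + \mathrm{H^{+}} \rightleftharpoons \mathrm{ONOOH}
  \longrightarrow \left[\mathrm{ONO^{\bullet}}\ {}^{\bullet}\mathrm{OH}\right]_{\mathrm{cage}}
  \begin{cases}
    \xrightarrow{\ \sim 30\%\ } {}^{\bullet}\mathrm{NO_{2}} + {}^{\bullet}\mathrm{OH}
      & \text{(diffusion out of the cage)}\\[2pt]
    \xrightarrow{\ \sim 70\%\ } \mathrm{HNO_{3}}
      & \text{(in-cage recombination)}
  \end{cases}
```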
Abstract:
We describe the use of singular value decomposition in transforming genome-wide expression data from genes × arrays space to reduced diagonalized “eigengenes” × “eigenarrays” space, where the eigengenes (or eigenarrays) are unique orthonormal superpositions of the genes (or arrays). Normalizing the data by filtering out the eigengenes (and eigenarrays) that are inferred to represent noise or experimental artifacts enables meaningful comparison of the expression of different genes across different arrays in different experiments. Sorting the data according to the eigengenes and eigenarrays gives a global picture of the dynamics of gene expression, in which individual genes and arrays appear to be classified into groups of similar regulation and function, or similar cellular state and biological phenotype, respectively. After normalization and sorting, the significant eigengenes and eigenarrays can be associated with observed genome-wide effects of regulators, or with measured samples, in which these regulators are overactive or underactive, respectively.
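A minimal numpy sketch of the normalization step described above: compute the SVD of the genes × arrays matrix and zero the eigengenes inferred to represent noise. Treating the weakest singular components as "noise" is an illustrative assumption; the paper infers the noise components from the data themselves.

```python
import numpy as np

def svd_normalize(E, n_noise=1):
    """E: genes x arrays expression matrix. Remove the n_noise weakest
    eigengene/eigenarray pairs and return the filtered matrix."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)  # eigenarrays, weights, eigengenes
    s_filtered = s.copy()
    s_filtered[-n_noise:] = 0.0                       # discard weakest superpositions
    return (U * s_filtered) @ Vt

rng = np.random.default_rng(1)
E = rng.standard_normal((100, 8))                     # toy genes x arrays data
E_clean = svd_normalize(E, n_noise=2)
print(np.linalg.matrix_rank(E_clean))                 # 6: two components removed
```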
Abstract:
The existence of the RNA world, in which RNA acted as a catalyst as well as an informational macromolecule, assumes a large prebiotic source of ribose or the existence of pre-RNA molecules with backbones different from ribose-phosphate. The generally accepted prebiotic synthesis of ribose, the formose reaction, yields numerous sugars without any selectivity. Even if there were a selective synthesis of ribose, there is still the problem of stability. Sugars are known to be unstable in strong acid or base, but there are few data for neutral solutions. Therefore, we have measured the rate of decomposition of ribose between pH 4 and pH 8 from 40 °C to 120 °C. The ribose half-lives are very short (73 min at pH 7.0 and 100 °C and 44 years at pH 7.0 and 0 °C). The other aldopentoses and aldohexoses have half-lives within an order of magnitude of these values, as do 2-deoxyribose, ribose 5-phosphate, and ribose 2,4-bisphosphate. These results suggest that the backbone of the first genetic material could not have contained ribose or other sugars because of their instability.
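The two half-lives quoted above pin down an apparent activation energy via a two-point Arrhenius relation, assuming first-order decomposition over the whole range (an assumption; the abstract does not state the kinetic order):

```latex
k = \frac{\ln 2}{t_{1/2}}, \qquad
E_a = R\,\ln\!\frac{t_{1/2}(273\,\mathrm{K})}{t_{1/2}(373\,\mathrm{K})}
      \left(\frac{1}{273\,\mathrm{K}} - \frac{1}{373\,\mathrm{K}}\right)^{-1}
    \approx \frac{8.314 \times \ln\!\left(3.2\times 10^{5}\right)}{9.8\times 10^{-4}}
      \ \mathrm{J\,mol^{-1}}
    \approx 1.1\times 10^{5}\ \mathrm{J\,mol^{-1}},
```

i.e., roughly 107 kJ/mol, using 44 yr / 73 min ≈ 3.2 × 10^5 for the half-life ratio.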
Abstract:
An image contains information that needs to be organized in order to interpret and understand its content. There are several computational techniques for extracting the main information from an image, and they can be divided into three areas: color, texture, and shape analysis. Shape analysis is one of the most important, since it describes characteristics of objects based on their boundary points. We propose an image characterization method, via shape analysis, based on the spectral properties of the graph Laplacian. The procedure builds graphs G from the boundary points of the object, whose connections between vertices are determined by thresholds T_l. From the graphs, the adjacency matrix A and the degree matrix D are obtained, which define the Laplacian matrix L = D - A. The spectral decomposition of the Laplacian matrix (its eigenvalues) is investigated to describe characteristics of the images. Two approaches are considered: (a) analysis of the feature vector based on thresholds and histograms, which considers two parameters, the class interval IC_l and the threshold T_l; (b) analysis of the feature vector based on several thresholds for fixed eigenvalues, namely the second and the last eigenvalue of the matrix L. The techniques were tested on three image collections: synthetic images (Generic), intestinal parasites (SADPI), and plant leaves (CNShape), each with its own characteristics and challenges. To evaluate the results, we employed a support vector machine (SVM) classification model, which assesses our approaches by determining how well the categories are separated. The first approach achieved an accuracy of 90% on the Generic image collection, 88% on the SADPI collection, and 72% on the CNShape collection. The second approach achieved accuracies of 97% on the Generic image collection, 83% on SADPI, and 86% on CNShape. The results show that image classification based on the Laplacian spectrum categorizes the images satisfactorily.
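A compact sketch of the pipeline described above, with illustrative choices (Euclidean distances between boundary points, a single threshold T_l, a circle as toy shape) standing in for the thesis's full parameterization:

```python
import numpy as np

def laplacian_spectrum(boundary_points, threshold):
    """boundary_points: (n, 2) array of object contour coordinates.
    Connect vertices closer than `threshold` and return the eigenvalues
    of the graph Laplacian L = D - A."""
    diff = boundary_points[:, None, :] - boundary_points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    A = ((dist < threshold) & (dist > 0)).astype(float)  # adjacency matrix
    D = np.diag(A.sum(axis=1))                           # degree matrix
    L = D - A                                            # graph Laplacian
    return np.linalg.eigvalsh(L)                         # spectrum, ascending

# a circle's boundary as a toy shape
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
eigs = laplacian_spectrum(pts, threshold=0.5)
feature = np.histogram(eigs, bins=10)[0]   # approach (a): histogram feature vector
print(eigs[1], eigs[-1])                   # approach (b): second and last eigenvalue
```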
Abstract:
Electroencephalographic (EEG) signals of the human brain represent electrical activity for a number of channels recorded over the scalp. The main purpose of this thesis is to investigate the interactions and causality between different parts of the brain using EEG signals recorded while subjects performed verbal fluency tasks. Subjects who have Parkinson's disease (PD) have difficulties with mental tasks, such as switching from one behavioral task to another. The behavioral tasks include phonemic fluency, semantic fluency, category semantic fluency and reading fluency. These tasks engage verbal generation skills, activating Broca's area (Brodmann areas BA44 and BA45). Advanced signal processing techniques are used to determine the frequency bands activated in the Granger causality analysis of the verbal fluency tasks. A graph learning technique for channel strength is used to characterize the complex graph of Granger causality, and the support vector machine (SVM) method is used to train a classifier between two subjects with PD and two healthy controls. Neural data for the study were recorded at the Colorado Neurological Institute (CNI). The study reveals significant differences between PD subjects and healthy controls in terms of brain connectivity in Broca's area (BA44 and BA45) as captured by the corresponding EEG electrodes. The results in this thesis also demonstrate the possibility of classifying subjects based on the flow of information and causality in the brain during verbal fluency tasks. These methods have the potential to be applied in the future to identify pathological information flow and causality in neurological diseases.
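A minimal two-channel illustration of the Granger idea used above, with plain least-squares AR fits (the lag order, the synthetic signals, and the variance-ratio statistic are illustrative assumptions; the thesis works with multichannel EEG, frequency bands, and a graph-learning step on top):

```python
import numpy as np

def granger_gain(x, y, lag=5):
    """Variance of the residual when predicting x from its own past, divided
    by the variance when the past of y is added: values clearly above 1
    suggest that y Granger-causes x."""
    n = len(x)
    past_x = np.array([x[t - lag:t] for t in range(lag, n)])               # own past
    past_xy = np.array([np.r_[x[t - lag:t], y[t - lag:t]] for t in range(lag, n)])
    target = x[lag:]
    res = []
    for M in (past_x, past_xy):
        coef, *_ = np.linalg.lstsq(M, target, rcond=None)                  # AR fit
        res.append(np.var(target - M @ coef))
    return res[0] / res[1]

rng = np.random.default_rng(2)
y = rng.standard_normal(2000)
x = np.roll(y, 1) + 0.5 * rng.standard_normal(2000)  # x is driven by the past of y
print(granger_gain(x, y))   # noticeably > 1: y Granger-causes x
print(granger_gain(y, x))   # close to 1: no causality in reverse
```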
Abstract:
Communication presented at the XI Workshop of Physical Agents, Valencia (Spain), September 9-10, 2010.
Abstract:
For many years, humans and machines have shared the same physical space. To facilitate their interaction with humans and their social integration, and to obtain more rational behavior, robots have been expected to demonstrate human-like behavior. To achieve this, it is necessary to understand how human behavior is generated, and to analyze which tasks are performed and how they relate to one another, for subsequent implementation in robots. In this paper, we propose a model of competencies based on the human neuroregulatory system for the analysis and decomposition of behavior into functional modules. Using this model allows us to separate and locate the tasks to be implemented in a robot that displays human-like behavior. As an example, we show the application of the model to autonomous movement behavior in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and physical devices of different natures. The main result of this work is a model of competencies that is being used to build robotic systems capable of displaying behaviors similar to those of humans while taking into account the specific characteristics of robots.
Abstract:
Support for this work was provided by the Generalitat Valenciana (Spain) through project PROMETEO/2009/043/FEDER, and by the Spanish MCT through grant CTQ2008-05520.
Abstract:
The pyrolysis and combustion of corn stover were studied by dynamic thermogravimetry and derivative thermogravimetry (TG-DTG) at heating rates of 5, 10, 20 and 50 K min−1 at atmospheric pressure. For the simulation of the pyrolysis and combustion processes, a kinetic model based on a distribution of activation energies was used, with three pools of reactants (three pseudocomponents) because of the complexity of biomass samples of agricultural origin. The experimental thermogravimetric data of the pyrolysis and combustion processes were fitted simultaneously to determine a single set of kinetic parameters able to describe both processes at the different heating rates. The proposed model achieves a good correlation between the experimental and calculated curves, with an error of less than 4% when fitting the four heating rates simultaneously. The experimental results and kinetic parameters may provide useful data for the design of thermal decomposition processing systems using corn stover as feedstock. In addition, the main compounds in the evolved gas were analyzed by means of a micro gas chromatograph.
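The distributed-activation-energy structure of such a model can be written, for pseudocomponent j at conversion α_j, roughly as follows; the Gaussian form of f_j(E) and the linear heating law are common choices stated here as assumptions, since the abstract does not fix them:

```latex
1-\alpha_j(t) \;=\; \int_0^{\infty}
  \exp\!\left(-A_j \int_0^{t} e^{-E/\left(R\,T(\tau)\right)}\,d\tau\right) f_j(E)\,dE,
\qquad
\alpha(t) \;=\; \sum_{j=1}^{3} c_j\,\alpha_j(t),
\qquad
T(\tau) \;=\; T_0 + \beta\,\tau,
```

with f_j(E) typically a Gaussian of mean E_{0,j} and width σ_j, c_j the pseudocomponent fractions, and β the heating rate; fitting all four heating rates at once constrains a single set of (A_j, E_{0,j}, σ_j, c_j).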
Abstract:
The pyrolysis of a sludge produced in the wastewater treatment plant of an oil refinery was studied in a pilot plant reactor provided with a system for the condensation of semivolatile matter. The study comprises experiments at 350, 400, 470 and 530 °C in a nitrogen atmosphere. Analyses of all the products obtained (gases, liquids and chars) are presented, together with a thermogravimetric study of the char produced and an analysis of the main components of the liquid. In the temperature range studied, the composition of the gas fraction does not vary appreciably. In the liquids, the light hydrocarbon yield increases with increasing temperature, whereas the aromatic compounds diminish. The decomposition of the solid fraction has been analysed, revealing a material that reacts rapidly with oxygen regardless of the conditions under which it is formed.
Abstract:
A nonempty set F is called Motzkin decomposable when it can be expressed as the Minkowski sum of a compact convex set C with a closed convex cone D. In that case, the sets C and D are called compact and conic components of F. This paper provides new characterizations of the Motzkin decomposable sets involving truncations of F (i.e., intersections of F with closed halfspaces) when F contains no lines, and truncations of the intersection F̂ of F with the orthogonal complement of the lineality of F otherwise. In particular, it is shown that a nonempty closed convex set F is Motzkin decomposable if and only if there exists a hyperplane H parallel to the lineality of F such that one of the truncations of F̂ induced by H is compact whereas the other one is a union of closed halflines emanating from H. Thus, any Motzkin decomposable set F can be expressed as F = C + D, where the compact component C is a truncation of F̂. These Motzkin decompositions are said to be of type T when F contains no lines, i.e., when C is a truncation of F. The minimality of this type of decomposition is also discussed.
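In symbols, the definition and the characterization stated above read as follows (with lin F the lineality space of F and H^−, H^+ the two closed halfspaces bounded by H; the notation is standard, not taken from the paper):

```latex
F \text{ Motzkin decomposable} \;\iff\; F = C + D,
\quad C \text{ compact convex},\ D \text{ closed convex cone};
```

```latex
\hat{F} = F \cap (\operatorname{lin} F)^{\perp}, \qquad
\hat{F}\cap H^{-} \text{ compact}, \qquad
\hat{F}\cap H^{+} \text{ a union of closed halflines emanating from } H,
```

in which case one may take the compact component to be the truncation C = F̂ ∩ H^− (and C = F ∩ H^− when F contains no lines, giving a decomposition of type T).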