866 results for High-Dimensional Space Geometrical Informatics (HDSGI)
Abstract:
We study, in a d-dimensional space-time, the nonanalyticity of the thermal free energy in the scalar φ^4 theory as well as in QED. We find that the infrared divergent contributions induce, when d is even, a nonanalyticity in the coupling α of the form α^((d-1)/2), whereas when d is odd the nonanalyticity is only logarithmic.
Abstract:
We report a morphology-based approach for the automatic identification of outlier neurons, as well as its application to the NeuroMorpho.org database, with more than 5,000 neurons. Each neuron in a given analysis is represented by a feature vector composed of 20 measurements, which are then projected into a two-dimensional space by applying principal component analysis. Bivariate kernel density estimation is then used to obtain the probability distribution for the group of cells, so that the cells with the highest probabilities are understood as archetypes while those with the lowest probabilities are classified as outliers. The potential of the methodology is illustrated in several cases involving uniform cell types as well as cell types for specific animal species. The results provide insights regarding the distribution of cells, yielding single- and multi-variate clusters, and they suggest that outlier cells tend to be more planar and tortuous. The proposed methodology can be used in several situations involving one or more categories of cells, as well as for the detection of new categories and possible artifacts.
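The projection-and-density step described in this abstract can be sketched in a few lines. The sketch below is illustrative only: the random feature matrix, the 5% outlier cut-off, and the choice of the five highest-density cells as "archetypes" are assumptions, not values taken from the paper.

```python
# Illustrative sketch: project morphology features to 2-D with PCA, estimate a
# bivariate kernel density, and flag the lowest-density cells as outliers.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import gaussian_kde

def find_outlier_neurons(features, outlier_fraction=0.05):
    """features: (n_neurons, n_measurements) array of morphometric measurements."""
    coords = PCA(n_components=2).fit_transform(features)   # 2-D projection
    density = gaussian_kde(coords.T)(coords.T)              # bivariate KDE at each cell
    cutoff = np.quantile(density, outlier_fraction)
    archetypes = np.argsort(density)[::-1][:5]               # highest-density cells
    outliers = np.where(density <= cutoff)[0]                # lowest-density cells
    return archetypes, outliers

# toy usage with random "measurements" standing in for the 20 morphometric features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
archetypes, outliers = find_outlier_neurons(X)
print("archetypes:", archetypes, "number of outliers:", len(outliers))
```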
Abstract:
Thermal treatment (thermal rectification) is a process in which the technological properties of wood are modified using thermal energy, often resulting in value-added wood. Thermally treated wood takes on color shades similar to those of tropical woods and offers considerable resistance to destructive microorganisms and weathering, in addition to having high dimensional stability and low hygroscopicity. Wood samples of Eucalyptus grandis were subjected to various thermal treatments, performed either in the presence (140 °C, 160 °C, 180 °C) or in the absence of oxygen (160 °C, 180 °C, 200 °C) inside a thermal treatment chamber, and then studied as to their chemical characteristics. Increasing the maximum treatment temperature led to a reduction in the holocellulose content of the samples as a result of the degradation and volatilization of hemicelluloses, also leading to an increase in the relative lignin content. Except for glucose, all monosaccharide levels were found to decrease in samples after the thermal treatment at a maximum temperature of 200 °C. Thermal treatment above 160 °C led to increased levels of total extractives in the wood samples, probably owing to the emergence of low-molecular-weight substances as a result of thermal degradation. Overall, it was not possible to clearly determine the effect of the presence or absence of oxygen in the air during thermal treatment on the chemical characteristics of the wood samples.
Abstract:
Semi-supervised learning is a classification paradigm in which only a few labeled instances are available for the training process. To overcome this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of the data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, and data items with different labels compete with one another to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified when the system reaches its stable state. A stability analysis of the proposed dynamical system is performed and some heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
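As an illustration of the general idea only, a minimal attraction-force classifier might look like the sketch below. The bounded exponential force, the Euler integration step, and the scoring at the stable state are choices made purely for the example; the paper's actual force function, cooperation term, and stopping criterion are not reproduced here.

```python
# Minimal sketch of force-based semi-supervised classification (illustrative only).
import numpy as np

def force_based_ssl(X_labeled, y_labeled, X_unlabeled, steps=300, dt=0.05):
    """Move unlabeled points under a bounded attraction towards labeled points,
    then assign each to the class that pulls it most strongly at the stable state."""
    X_u = X_unlabeled.astype(float).copy()
    classes = np.unique(y_labeled)
    for _ in range(steps):
        diff = X_labeled[None, :, :] - X_u[:, None, :]   # (n_unlabeled, n_labeled, dim)
        dist = np.linalg.norm(diff, axis=2) + 1e-9
        pull = np.exp(-dist)                              # bounded attraction strength
        X_u += dt * (pull[:, :, None] * diff / dist[:, :, None]).sum(axis=1)
    diff = X_labeled[None, :, :] - X_u[:, None, :]
    pull = np.exp(-np.linalg.norm(diff, axis=2))
    scores = np.stack([pull[:, y_labeled == c].sum(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

# toy usage: two Gaussian blobs with three labeled points each
rng = np.random.default_rng(0)
blob0 = rng.normal([0.0, 0.0], 0.5, (30, 2))
blob1 = rng.normal([3.0, 3.0], 0.5, (30, 2))
X_l = np.vstack([blob0[:3], blob1[:3]])
y_l = np.array([0, 0, 0, 1, 1, 1])
X_u = np.vstack([blob0[3:], blob1[3:]])
print(force_based_ssl(X_l, y_l, X_u))   # most points should recover their blob's label
```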
Abstract:
The Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a Quasi-Geostrophic model and a high-dimensional, primitive-equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs for data assimilation compared, for example, with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observation system simulation experiments in a perfect model setting, and with two types of model error as well: random and systematic. In the different configurations examined, and in a perfect model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it improves the tracking of regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In Numerical Weather Prediction models, tuning of parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
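For readers unfamiliar with the ingredients, the toy sketch below puts them together for the Lorenz 1963 model with a single bred vector and an observation of the x component only. The cycle length, the bred-vector rescaling size, the scalar gain formula, and the fixed assumed background variance are illustrative assumptions, not the AUS-BDAS configuration or the iterative scheme used in the study.

```python
# Toy sketch: breeding one unstable direction on the Lorenz 1963 model and
# confining the analysis increment to that direction (AUS idea, simplified).
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
truth = np.array([1.0, 1.0, 20.0])
analysis = truth + rng.normal(0.0, 1.0, 3)      # imperfect initial state
bred = rng.normal(0.0, 1.0, 3)                  # perturbation to be bred
bred_size, obs_err, bkg_var = 0.1, 0.5, 1.0     # illustrative values
H = np.array([1.0, 0.0, 0.0])                   # observe the x component only

for cycle in range(400):
    for _ in range(25):                         # forecast between observation times
        truth = rk4_step(truth)
        perturbed = rk4_step(analysis + bred)
        analysis = rk4_step(analysis)
        bred = perturbed - analysis             # breed the fastest-growing error direction
    bred *= bred_size / np.linalg.norm(bred)    # rescale the bred vector (BDAS step)
    obs = H @ truth + rng.normal(0.0, obs_err)  # noisy observation of x
    innovation = obs - H @ analysis
    e = bred / np.linalg.norm(bred)             # unit unstable direction
    # confine the analysis increment to the bred (unstable) direction
    gain = bkg_var * e * (H @ e) / (bkg_var * (H @ e) ** 2 + obs_err ** 2)
    analysis = analysis + gain * innovation

print("final analysis error:", np.linalg.norm(analysis - truth))
```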
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized to have a strong theoretical background and to be effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets with node labels belonging to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the kernel application in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
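As background for the sparsity discussion, the classic subtree kernel (counting pairs of identical rooted subtrees) can be sketched as follows. This is the standard baseline the thesis improves upon, not the adaptive, positional, or DAG-compressed kernels it proposes; the tuple-based tree encoding is an assumption for the example.

```python
# Minimal sketch of the classic subtree kernel on tuple-encoded trees.
from collections import Counter

def collect_subtrees(tree, out):
    """tree = (label, [children]); append a canonical string for every rooted subtree."""
    label, children = tree
    key = f"{label}({','.join(collect_subtrees(c, out) for c in children)})"
    out.append(key)
    return key

def subtree_kernel(t1, t2):
    """Count pairs of nodes of t1 and t2 whose full subtrees are identical."""
    s1, s2 = [], []
    collect_subtrees(t1, s1)
    collect_subtrees(t2, s2)
    c1, c2 = Counter(s1), Counter(s2)
    return sum(c1[k] * c2[k] for k in c1.keys() & c2.keys())

# toy usage on two small parse-tree-like structures
t_a = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", [])])])
t_b = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", []), ("NP", [("N", [])])])])
print(subtree_kernel(t_a, t_b))   # shared subtrees: D, N (twice), NP(D,N), V
```

With many distinct node labels, most of these canonical strings never match across trees, which is exactly the sparsity problem described above.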
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here refers to a class of learning methods based on approximate dynamic programming, used in particular in artificial intelligence for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major obstacle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation schemes. The goal of this work is to make reinforcement learning applicable to, in principle arbitrarily, high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks in which the sought solution is parameterized by the data, so that an explicit choice of nodes/basis functions is unnecessary and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages, however, come with a very practical problem: the computational cost of using regularization networks naturally scales as O(n^3), where n is the number of data points. This is especially problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment, so the solution must be updated immediately and with little computational effort. The contribution of this thesis therefore consists of two parts. In the first part, we formulate an efficient learning algorithm for regularization networks for solving general regression problems, tailored specifically to the requirements of online learning. Our approach builds on recursive least squares, but can incorporate not only new data points but also new basis functions into the existing model in constant time. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part, we carry this algorithm over to approximate policy evaluation via least-squares-based temporal-difference learning, and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous and high-dimensional state spaces and stochastic state transitions. In doing so, we do not rely on a model of the environment, operate largely independently of the dimensionality of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the capabilities of our approach on two realistic and complex application examples: the RoboCup-Keepaway problem and the control of a (simulated) octopus tentacle.
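To make the first contribution concrete, here is a hedged sketch of an online kernel regression of this flavour: kernel features over a growing dictionary of centers ("subset of regressors") combined with a recursive least-squares weight update. The Gaussian kernel, the simple novelty test used in place of the thesis's greedy selection procedure, and all parameter values are illustrative assumptions.

```python
# Illustrative online regression with a regularization-network flavour.
import numpy as np

class OnlineKernelRLS:
    def __init__(self, gamma=1.0, novelty=0.3, prior_var=10.0, noise_var=0.1):
        self.gamma, self.novelty = gamma, novelty
        self.prior_var, self.noise_var = prior_var, noise_var
        self.centers, self.w, self.P = [], np.zeros(0), np.zeros((0, 0))

    def _phi(self, x):
        # kernel evaluations against the current dictionary of centers
        return np.array([np.exp(-self.gamma * np.sum((x - c) ** 2)) for c in self.centers])

    def update(self, x, y):
        phi = self._phi(x)
        # novelty test: add x as a new center if no existing center represents it well
        if len(self.centers) == 0 or phi.max() < self.novelty:
            self.centers.append(np.asarray(x, dtype=float))
            self.w = np.append(self.w, 0.0)
            P_new = np.zeros((len(self.w), len(self.w)))
            P_new[:-1, :-1] = self.P
            P_new[-1, -1] = self.prior_var        # weak prior on the new weight
            self.P = P_new
            phi = self._phi(x)
        # standard recursive least-squares update of the weights
        Pphi = self.P @ phi
        k = Pphi / (self.noise_var + phi @ Pphi)
        self.w = self.w + k * (y - phi @ self.w)
        self.P = self.P - np.outer(k, Pphi)

    def predict(self, x):
        return float(self._phi(x) @ self.w)

# toy usage: learn y = sin(x) from a stream of noisy samples
rng = np.random.default_rng(0)
model = OnlineKernelRLS()
for _ in range(500):
    x = rng.uniform(-3, 3, size=1)
    model.update(x, np.sin(x[0]) + 0.05 * rng.normal())
print(model.predict(np.array([1.0])), "vs", np.sin(1.0), "| centers:", len(model.centers))
```

The point of the design is that each update costs time that depends only on the (small) dictionary size, not on the total number of samples seen, which is what makes the online setting tractable.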
Electroweak precision observables and effective four-fermion interactions in warped extra dimensions
Abstract:
In this thesis, we study the phenomenology of selected observables in the context of the Randall-Sundrum scenario of a compactified warped extra dimension. Gauge and matter fields are assumed to live in the whole five-dimensional space-time, while the Higgs sector is localized on the infrared boundary. An effective four-dimensional description is obtained via Kaluza-Klein decomposition of the five-dimensional quantum fields. The symmetry-breaking effects due to the Higgs sector are treated exactly, and the decomposition of the theory is performed in a covariant way. We develop a formalism which allows for a straightforward generalization to scenarios with an extended gauge group compared to the Standard Model of elementary particle physics. As an application, we study the so-called custodial Randall-Sundrum model and compare the results to those of the original formulation. We present predictions for electroweak precision observables, the Higgs production cross section at the LHC, the forward-backward asymmetry in top-antitop production at the Tevatron, as well as the width difference, the CP-violating phase, and the semileptonic CP asymmetry in B_s decays.
Abstract:
This thesis work falls within the field of classification of high-dimensional data and develops an algorithm based on the Discriminant Analysis method. It classifies samples using the variables taken in pairs, building a network from the pairs whose classification performance is sufficiently high. The algorithm then exploits topological properties of networks (in particular the search for subnetworks and centrality measures of single nodes) to obtain various signatures (subsets of the initial variables) with optimal classification performance and characterized by low dimensionality (of the order of 10^1, at least a factor of 10^3 smaller than the number of starting variables in the problems considered). To do this, the algorithm comprises a network-definition stage and a signature selection and reduction stage, recomputing the classification performance at each step through cross-validation tests (k-fold or leave-one-out). Given the high number of variables involved in the problems considered, of the order of 10^4, the algorithm necessarily had to be implemented on a high-performance computer, with the most expensive parts of the C++ code parallelized, namely the actual computation of the discriminant and the final sorting of the results. The application studied here concerns high-throughput genetic data on gene expression at the cellular level, a field in which databases frequently consist of a very large number of variables (10^4-10^5) against a small number of samples (10^1-10^2). In the medical and clinical field, the determination of low-dimensional signatures for the discrimination and classification of samples (e.g. healthy/diseased, responder/non-responder, etc.) is a problem of fundamental importance, for example for the development of personalized therapeutic strategies for specific subgroups of patients through diagnostic kits for expression-profile analysis applicable on a large scale. The analysis carried out in this thesis on several types of real data shows that the proposed method, also in comparison with other existing methods based or not on the network approach, provides excellent performance, producing signatures with high classification performance while keeping the number of variables used for this purpose very small.
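A hedged sketch of the general pipeline (pairwise discriminant scoring, a network of well-performing pairs, and a centrality-based signature) is shown below. The dataset, the percentile threshold, the signature size of five, and the use of plain degree centrality are illustrative choices, not those of the thesis.

```python
# Illustrative pairwise-discriminant network and signature extraction.
from itertools import combinations
from collections import Counter
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for a gene-expression matrix
X = X[:, :12]                                # keep the example small

# score every pair of variables with a cross-validated discriminant classifier
pairs = list(combinations(range(X.shape[1]), 2))
scores = np.array([
    cross_val_score(LinearDiscriminantAnalysis(), X[:, list(p)], y, cv=5).mean()
    for p in pairs
])

# keep the best-performing pairs as edges of a variable network
threshold = np.quantile(scores, 0.9)
edges = [p for p, s in zip(pairs, scores) if s >= threshold]

# a simple centrality (node degree) ranks variables; the top ones form the signature
degree = Counter(v for edge in edges for v in edge)
signature = [v for v, _ in degree.most_common(5)]
final = cross_val_score(LinearDiscriminantAnalysis(), X[:, signature], y, cv=5).mean()
print("signature:", signature, "cross-validated accuracy: %.3f" % final)
```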
Abstract:
One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these are greatly influenced by the crystal structure. The prediction of hitherto unknown crystal structures with regard to external conditions like pressure and temperature is therefore one of the most important goals in theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, which is the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. The fact that the complexity of the problem grows exponentially with the system size is the reason why it can only be solved via heuristic strategies. Improvements to the artificial bee colony method, where the local exploration of the potential energy surface is done by a high number of independent walkers, are developed and implemented. This results in an improved communication scheme between these walkers, which directs the search towards the most promising areas of the potential energy surface. The minima hopping method uses short molecular dynamics simulations at elevated temperatures to direct the structure search from one local minimum of the potential energy surface to the next. A modification, where the local information around each minimum is extracted and used in an optimization of the search direction, is developed and implemented. Our method uses this local information to increase the probability of finding new, lower local minima. This leads to an enhanced performance of the global optimization algorithm. Hydrogen is a highly relevant system, due to the possibility of finding a metallic phase and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material. Additionally, it becomes metallic at relatively low pressures.
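The basic minima-hopping loop is simple to illustrate. The sketch below runs it on a toy two-dimensional analytic potential rather than a crystal structure, and all parameters (MD length, temperature feedback factors, acceptance window) are illustrative assumptions; it does not include the search-direction optimization developed in the thesis.

```python
# Toy minima hopping: heated MD to escape a minimum, local optimisation to land
# in the next one, and temperature feedback depending on whether it is new.
import numpy as np
from scipy.optimize import minimize

def energy(x):
    """Toy multi-well potential standing in for the enthalpy surface."""
    return np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)

def forces(x, h=1e-5):
    """Negative finite-difference gradient of the toy potential."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (energy(x + e) - energy(x - e)) / (2 * h)
    return -g

def md_escape(x, temperature, rng, steps=60, dt=0.05):
    """Short velocity-Verlet run with random initial velocities to leave the current minimum."""
    v = rng.normal(0.0, np.sqrt(temperature), size=x.shape)
    a = forces(x)
    for _ in range(steps):
        x = x + dt * v + 0.5 * dt ** 2 * a
        a_new = forces(x)
        v = v + 0.5 * dt * (a + a_new)
        a = a_new
    return x

rng = np.random.default_rng(0)
current = minimize(energy, rng.uniform(-2, 2, 2)).x
visited = {tuple(np.round(current, 3))}
temperature, e_accept = 0.5, 0.2

for hop in range(30):
    candidate = minimize(energy, md_escape(current, temperature, rng)).x
    key = tuple(np.round(candidate, 3))
    temperature *= 0.9 if key not in visited else 1.3   # feedback on the MD temperature
    visited.add(key)
    if energy(candidate) < energy(current) + e_accept:  # accept minima that are low enough
        current = candidate

best = min(visited, key=lambda k: energy(np.array(k)))
print("lowest minimum visited:", best, "energy:", round(energy(np.array(best)), 3))
```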
Abstract:
In this work we study a polyenergetic and multimaterial model for breast image reconstruction in Digital Tomosynthesis, taking into consideration the variety of the materials forming the object and the polyenergetic nature of the X-ray beam. The modelling of the problem leads to the resolution of a high-dimensional nonlinear least-squares problem that, being an ill-posed inverse problem, needs some kind of regularization. We test two main classes of methods: the Levenberg-Marquardt method (together with the Conjugate Gradient method for the computation of the descent direction) and two limited-memory BFGS-like methods (L-BFGS). We perform experiments for different values of the regularization parameter (constant or varying at each iteration), tolerances and stopping conditions. Finally, we analyse the performance of the several methods by comparing relative errors, numbers of iterations, computation times and the quality of the reconstructed images.
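The structure of such a reconstruction can be sketched on a toy problem. Below, a small polyenergetic-style forward model m_i = sum_e w_e * exp(-(A (mu_e x))_i) is inverted with (a) Levenberg-Marquardt on Tikhonov-regularised residuals and (b) an L-BFGS variant on the corresponding least-squares objective. The geometry matrix, spectrum weights, noise level, and regularisation weight are all illustrative assumptions, not the thesis model or data.

```python
# Toy regularised nonlinear least-squares reconstruction with LM and L-BFGS-B.
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(0)
n_rays, n_vox = 40, 20
A = rng.uniform(0, 1, (n_rays, n_vox)) / n_vox            # toy projection geometry
weights, mu = np.array([0.6, 0.4]), np.array([1.0, 0.5])  # two-energy "spectrum"
x_true = np.clip(rng.normal(1.0, 0.3, n_vox), 0, None)

def forward(x):
    # polyenergetic Beer-Lambert-style measurement model
    return sum(w * np.exp(-A @ (m * x)) for w, m in zip(weights, mu))

data = forward(x_true) + 1e-3 * rng.normal(size=n_rays)   # noisy measurements
lam = 1e-2                                                 # Tikhonov regularisation weight

def residuals(x):
    return np.concatenate([forward(x) - data, np.sqrt(lam) * x])

x0 = np.full(n_vox, 0.5)
x_lm = least_squares(residuals, x0, method="lm").x         # Levenberg-Marquardt
obj = lambda x: 0.5 * np.sum(residuals(x) ** 2)
x_lbfgs = minimize(obj, x0, method="L-BFGS-B").x           # limited-memory BFGS

for name, xr in [("LM", x_lm), ("L-BFGS-B", x_lbfgs)]:
    print(name, "relative error:", np.linalg.norm(xr - x_true) / np.linalg.norm(x_true))
```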
Abstract:
The spine is a complex structure that provides motion in three directions: flexion-extension, lateral bending and axial rotation. So far, the investigation of the mechanical and kinematic behavior of the basic unit of the spine, a motion segment, has predominantly been the domain of in vitro experiments on spinal loading simulators. Most existing approaches to measuring spinal stiffness intraoperatively in an in vivo environment use a distractor. However, these concepts usually assume planar loading and motion. The objective of our study was to develop and validate an apparatus that allows intraoperative in vivo measurements to determine both the applied force and the resulting motion in three-dimensional space. The proposed setup combines force measurement with an instrumented distractor and motion tracking with an optoelectronic system. As the orientation of the applied force and the three-dimensional motion are known, not only force-displacement but also moment-angle relations could be determined. The validation was performed using three cadaveric lumbar ovine spines. The lateral bending stiffness of two motion segments per specimen was determined with the proposed concept and compared with the stiffness acquired on a spinal loading simulator, which was considered the gold standard. The mean values of the stiffness computed with the proposed concept were within ±15% of the data obtained with the spinal loading simulator under applied loads of less than 5 Nm.
Abstract:
Software visualizations can provide a concise overview of a complex software system. Unfortunately, as software has no physical shape, there is no "natural" mapping of software to a two-dimensional space. As a consequence, most visualizations tend to use a layout in which position and distance have no meaning, and the layout therefore typically diverges from one visualization to another. We propose an approach to consistent layout for software visualization, called Software Cartography, in which the position of a software artifact reflects its vocabulary, and distance corresponds to similarity of vocabulary. We use Latent Semantic Indexing (LSI) to map software artifacts to a vector space, and then use Multidimensional Scaling (MDS) to map this vector space down to two dimensions. The resulting consistent layout allows us to develop a variety of thematic software maps that express very different aspects of software while making it easy to compare them. The approach is especially suitable for comparing views of evolving software, as the vocabulary of software artifacts tends to be stable over time. We present a prototype implementation of Software Cartography, and illustrate its use with practical examples from numerous open-source case studies.
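The two-stage mapping (LSI, then MDS) can be sketched in a few lines. The toy "vocabulary" corpus below and the use of TF-IDF with truncated SVD as the LSI step are illustrative assumptions, not the paper's implementation.

```python
# Illustrative LSI + MDS layout of software artifacts by vocabulary similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

# each "document" stands for the vocabulary of one software artifact (class/package)
artifacts = {
    "ui.Window":      "widget layout paint event button render",
    "ui.Dialog":      "widget button event modal layout",
    "net.HttpClient": "socket request response header timeout retry",
    "net.UrlParser":  "url scheme host path query parse",
    "db.Connection":  "sql connection transaction commit rollback pool",
}

tfidf = TfidfVectorizer().fit_transform(artifacts.values())
lsi = TruncatedSVD(n_components=3, random_state=0).fit_transform(tfidf)  # LSI-like space
dist = pairwise_distances(lsi, metric="cosine")                           # vocabulary dissimilarity
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

for name, (x, y) in zip(artifacts, coords):
    print(f"{name:15s} -> ({x:+.2f}, {y:+.2f})")
```

Because positions are derived from vocabulary, artifacts that talk about the same things (here the two ui.* classes, for instance) end up close together, which is what makes the layout stable across successive versions of the system.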
Abstract:
We carry out some computations of vector-valued Siegel modular forms of degree two, weight (k, 2) and level one, and highlight three experimental results: (1) we identify a rational eigenform in a three-dimensional space of cusp forms; (2) we observe that non-cuspidal eigenforms of level one are not always rational; (3) we verify a number of cases of conjectures about congruences between classical modular forms and Siegel modular forms. Our approach is based on Satoh's description of the module of vector-valued Siegel modular forms of weight (k, 2) and an explicit description of the Hecke action on Fourier expansions.
Abstract:
Generalized linear mixed models (GLMM) are generalized linear models with normally distributed random effects in the linear predictor. Penalized quasi-likelihood (PQL), an approximate method of inference in GLMMs, involves repeated fitting of linear mixed models with “working” dependent variables and iterative weights that depend on parameter estimates from the previous cycle of iteration. The generality of PQL, and its implementation in commercially available software, has encouraged the application of GLMMs in many scientific fields. Caution is needed, however, since PQL may sometimes yield badly biased estimates of variance components, especially with binary outcomes. Recent developments in numerical integration, including adaptive Gaussian quadrature, higher order Laplace expansions, stochastic integration and Markov chain Monte Carlo (MCMC) algorithms, provide attractive alternatives to PQL for approximate likelihood inference in GLMMs. Analyses of some well known datasets, and simulations based on these analyses, suggest that PQL still performs remarkably well in comparison with more elaborate procedures in many practical situations. Adaptive Gaussian quadrature is a viable alternative for nested designs where the numerical integration is limited to a small number of dimensions. Higher order Laplace approximations hold the promise of accurate inference more generally. MCMC is likely the method of choice for the most complex problems that involve high dimensional integrals.
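To make the numerical-integration alternative concrete, the following hedged sketch evaluates and maximises the marginal likelihood of a random-intercept logistic GLMM using ordinary (non-adaptive) Gauss-Hermite quadrature on simulated data. The simulated design, the 20-node rule, and the Nelder-Mead optimiser are illustrative assumptions, not taken from the analyses discussed above.

```python
# Gauss-Hermite quadrature for a random-intercept logistic GLMM (toy example).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_groups, n_per = 30, 10
beta_true, sigma_true = -0.5, 1.0
b = rng.normal(0, sigma_true, n_groups)                       # true random intercepts
y = rng.binomial(1, 1 / (1 + np.exp(-(beta_true + b[:, None]))), (n_groups, n_per))

nodes, weights = np.polynomial.hermite.hermgauss(20)          # Gauss-Hermite rule

def neg_loglik(params):
    beta, log_sigma = params
    sigma = np.exp(log_sigma)
    # integrate the random intercept out of each group's likelihood:
    # int prod_j p(y_ij | b) N(b; 0, sigma^2) db
    #   ~= sum_k (w_k / sqrt(pi)) * prod_j p(y_ij | sqrt(2)*sigma*z_k)
    eta = np.clip(beta + np.sqrt(2.0) * sigma * nodes[None, :], -30, 30)
    p = 1 / (1 + np.exp(-eta))                                # (1, n_nodes)
    ones = y.sum(axis=1, keepdims=True)                       # successes per group
    loglik_nodes = ones * np.log(p) + (n_per - ones) * np.log(1 - p)
    group_lik = (weights / np.sqrt(np.pi)) * np.exp(loglik_nodes)
    return -np.sum(np.log(group_lik.sum(axis=1)))

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
beta_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print("beta estimate:", round(beta_hat, 3), "sigma estimate:", round(sigma_hat, 3))
```

An adaptive rule would additionally recenter and rescale the nodes around each group's conditional mode, which is what keeps the number of quadrature points manageable in the nested designs mentioned above.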