989 results for Phase-space Methods
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional machine learning methods deal with vectorial information, they require an a priori preprocessing step. Among the learning techniques for dealing with structured data, kernel methods are recognized as effective approaches with a strong theoretical background. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing kernel functions that are both expressive and fast is a challenging problem. In the case of tree-structured data, two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures in the dataset are completely dissimilar to one another. In those cases the classifier has too little information to make correct predictions on unseen data; in fact, it tends to produce a discriminating function that behaves like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain. A second drawback of tree kernels is the time complexity required in both the learning and classification phases; such complexity can sometimes prevent their application in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. The first contribution aims at creating kernel functions that adapt to the statistical properties of the dataset, thus reducing sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees with an algorithm that projects the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matching between different inputs in the original space. The second contribution is a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure; the kernel function we propose focuses precisely on this aspect. The third contribution is devoted to reducing the computational burden of calculating a kernel function between a tree and a forest of trees, a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels and show an instantiation of our technique for kernels such as the subtree and subset tree kernels. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures of different trees, thus reducing the computational burden and storage requirements.
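To make the counting idea concrete, here is a minimal sketch of a convolution tree kernel in the classic Collins-Duffy style, which counts shared tree fragments; the `Tree` class and all names are illustrative and not taken from the thesis:

```python
from itertools import product

class Tree:
    """Minimal ordered, labelled tree."""
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def _production(n):
    # a node's "production": its label plus the ordered labels of its children
    return (n.label, tuple(c.label for c in n.children))

def _common(n1, n2, sigma):
    """C(n1, n2): weighted count of shared fragments rooted at n1 and n2."""
    if _production(n1) != _production(n2):
        return 0
    result = 1
    for c1, c2 in zip(n1.children, n2.children):
        if c1.children or c2.children:     # internal children may extend the fragment
            result *= sigma + _common(c1, c2, sigma)
    return result

def _nodes(t):
    yield t
    for ch in t.children:
        yield from _nodes(ch)

def tree_kernel(t1, t2, sigma=1):
    """K(T1, T2) = sum over all node pairs of C(n1, n2).
    sigma=1 gives the subset-tree (SST) kernel, sigma=0 the subtree (ST) kernel."""
    return sum(_common(a, b, sigma) for a, b in product(_nodes(t1), _nodes(t2)))
```

With sigma = 0 only complete identical subtrees contribute, which illustrates why exact-matching kernels become sparse when node labels rarely coincide.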
Abstract:
The subject of this thesis is the development and combination of various numerical methods, as well as their application to problems in strongly correlated electron systems. Such materials exhibit many interesting physical properties, such as superconductivity and magnetic ordering, and play an important role in technical applications. Two different models are treated: the Hubbard model and the Kondo lattice model (KLM). Over the last decades, many insights have been gained from the numerical solution of these models. Nevertheless, the physical origin of many effects remains hidden, because current methods are restricted to certain parameter regimes; one of the strongest restrictions is the lack of efficient algorithms for low temperatures.

Based on the Blankenbecler-Scalapino-Sugar quantum Monte Carlo (BSS-QMC) algorithm, we present a numerically exact method that solves the Hubbard model and the KLM efficiently at very low temperatures. This method is applied to the Mott transition in the two-dimensional Hubbard model. In contrast to earlier studies, we can clearly rule out a Mott transition at finite temperature and finite interaction.

On the basis of this exact BSS-QMC algorithm, we have developed an impurity solver for dynamical mean-field theory (DMFT) and its cluster extensions (CDMFT). DMFT is the predominant theory for strongly correlated systems in which conventional band-structure calculations fail; a main limitation is the availability of efficient impurity solvers for the intrinsic quantum problem. The algorithm developed in this thesis shows the same superior scaling with inverse temperature as BSS-QMC. We study the Mott transition within DMFT and analyze the influence of systematic errors on this transition.

Another prominent issue is the neglect of non-local interactions in DMFT. To address it, we combine direct BSS-QMC lattice calculations with CDMFT for the half-filled two-dimensional anisotropic Hubbard model, the doped Hubbard model, and the KLM. The results for the different models differ strongly: while non-local correlations play an important role in the two-dimensional (anisotropic) model, the momentum dependence of the self-energy in the paramagnetic phase is much weaker for strongly doped systems and for the KLM. A remarkable finding is that the self-energy can be parametrized by the non-interacting dispersion. This particular structure of the self-energy in momentum space can be very useful for classifying electronic correlation effects and opens the way for the development of new schemes beyond the limits of DMFT.
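For orientation, DMFT closes a self-consistency loop between the lattice Green's function and an auxiliary impurity problem. In its simplest textbook form (Bethe lattice with hopping t and a semicircular density of states, a standard illustration rather than necessarily the geometry used in the thesis) the loop reads:

```latex
\mathcal{G}_0^{-1}(i\omega_n) = i\omega_n + \mu - t^2\, G(i\omega_n),
\qquad
G(i\omega_n) = G_{\mathrm{imp}}\!\left[\mathcal{G}_0\right](i\omega_n),
```

where the impurity solver (here BSS-QMC) provides the map from the Weiss field \(\mathcal{G}_0\) to the impurity Green's function \(G_{\mathrm{imp}}\), and the two equations are iterated to convergence.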
Abstract:
The report reviews the technology of Free-space Optical Communication (FSO) and simulation methods for testing the performance of a diverged beam in this technology. In addition to the introduction, the theory of turbulence and its effect on laser propagation is reviewed. In the simulation chapter, on-off keying (OOK) modulation and a diverged beam are assumed at the transmitter, while at the receiver an avalanche photodiode (APD) converts the photon stream into an electron stream. Phase screens are adopted to simulate the effect of turbulence on the phase of the optical beam. The data-processing method is then introduced and reviewed. The summary chapter gives a general account of different beam divergences and their performance.
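The phase-screen technique mentioned here is commonly implemented as an FFT filter on white noise. Below is a minimal sketch following the widely used spectral method (e.g. Schmidt's convention); the function name and parameters are illustrative, and subharmonic corrections for poorly sampled low frequencies are omitted:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=None):
    """One random turbulence phase screen [rad] on an n x n grid.

    White Gaussian noise is filtered with the square root of the
    Kolmogorov phase power spectrum
        Phi(f) = 0.023 * r0**(-5/3) * f**(-11/3),
    where r0 is the Fried parameter [m] and dx the grid spacing [m].
    """
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=dx)              # spatial frequency [cycles/m]
    fx, fy = np.meshgrid(f, f)
    f2 = fx**2 + fy**2
    f2[0, 0] = np.inf                        # suppress the unphysical DC (piston) term
    psd = 0.023 * r0**(-5.0 / 3.0) * f2**(-11.0 / 6.0)
    df = 1.0 / (n * dx)                      # frequency-bin width
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * df * n**2
    return screen.real

# e.g. a 512 x 512 screen with 1 cm sampling and r0 = 5 cm:
# phi = kolmogorov_phase_screen(512, 0.01, 0.05, seed=1)
```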
Abstract:
Most available studies of interconnected matrix porosity of crystalline rocks are based on laboratory investigations; that is, work on samples that have undergone stress relaxation and were affected by drilling and sample preparation. The extrapolation of the results to in situ conditions is therefore associated with considerable uncertainty, and this was the motivation to conduct the ‘in situ Connected Porosity’ experiment at the Grimsel Test Site (Central Swiss Alps). An acrylic resin doped with fluorescent agents was used to impregnate the microporous granitic matrix in situ around an injection borehole, and samples were obtained by overcoring. The 3-D structure of the pore space, represented by microcracks, was studied by U-stage fluorescence microscopy. Petrophysical methods, including the determination of porosity, permeability and P-wave velocity, were also applied. Investigations were conducted both on samples that were impregnated in situ and on non-impregnated samples, so that natural features could be distinguished from artefacts. The investigated deformed granites display complex microcrack populations representing a polyphase deformation at varying conditions. The crack population is dominated by open cleavage cracks in mica and grain boundary cracks. The porosity of non-impregnated samples lies slightly above 1 per cent, which is 2–2.5 times higher than the in situ porosity obtained for impregnated samples. Measurements of seismic velocities (Vp) on spherical rock samples as a function of confining pressure, spatial direction and water saturation for both non-impregnated and impregnated samples provide further constraints on the distinction between natural and induced crack types. The main conclusions are that (1) an interconnected network of microcracks exists in the whole granitic matrix, irrespective of the distance to ductile and brittle shear zones, and (2) conventional laboratory methods overestimate the matrix porosity. This matters because calculations of contaminant transport through fractured media often rely on matrix diffusion as a retardation mechanism.
Abstract:
Phase-sensitive X-ray imaging shows a high sensitivity towards electron density variations, making it well suited for imaging of soft-tissue matter. However, open questions remain about the details of the image-formation process. Here, a framework for numerical simulations of phase-sensitive X-ray imaging is presented that takes both the particle- and the wave-like properties of X-rays into consideration: a split approach combines a Monte Carlo (MC) based sample part with a wave-optics-based propagation part. The framework can be adapted to different phase-sensitive imaging methods and has been validated through comparisons with experiments for grating interferometry and propagation-based imaging. The validation shows that the combination of wave optics and MC has been successfully implemented and yields good agreement between measurements and simulations, demonstrating that the physical processes relevant for a deeper understanding of scattering in phase-sensitive imaging are modelled in a sufficiently accurate manner.
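The wave-optics propagation step in such split schemes is typically a free-space angular-spectrum (Fresnel) propagator. As a point of reference, here is a minimal, generic version of that standard building block (an illustration, not the authors' code):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar wavefield a distance z through free space.

    Angular-spectrum method: FFT the field, multiply by the free-space
    transfer function H = exp(i*k*z*sqrt(1 - (lam*fx)^2 - (lam*fy)^2)),
    inverse FFT. All lengths in metres; dx is the pixel size.
    """
    n, m = field.shape
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = k * np.sqrt(np.maximum(arg, 0))
    h = np.exp(1j * kz * z) * (arg > 0)   # evanescent components are suppressed
    return np.fft.ifft2(np.fft.fft2(field) * h)
```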
Abstract:
The population of space debris has increased drastically in recent years, and these objects have become a great threat to active satellites. Because the relative velocities between space debris and satellites are high, debris objects may destroy active satellites in collisions; furthermore, collisions involving massive objects produce large numbers of fragments, leading to significant growth of the space debris population. The long-term evolution of the debris population is essentially driven by such catastrophic collisions, so an effective remediation measure to stabilize the population in Low Earth Orbit (LEO) is the removal of large, massive space debris. To remove these objects, not only precise orbits but also more detailed information about their attitude states will be required. Important properties of an object targeted for removal are its spin period, its spin-axis orientation, and their change over time. Rotating objects produce periodic brightness variations with frequencies related to their spin periods; such a brightness variation over time is called a light curve. Both collecting and processing light curves is challenging for several reasons: light curves may be undersampled, low-frequency components due to phase-angle and atmospheric-extinction changes may be present, and beat frequencies may occur when the rotation period is close to a multiple of the sampling period. Depending on the method used to extract the frequencies, method-specific properties also have to be taken into account. The Astronomical Institute of the University of Bern (AIUB) light curve database will be introduced, which contains more than 1,300 light curves acquired over more than seven years. We will discuss the properties and reliability of different time-series analysis methods tested and currently used by AIUB for light curve processing. Extracted frequencies and reconstructed phases will be presented for some interesting targets, e.g. GLONASS satellites, for which SLR data were also available to confirm the periods. Finally, we will present the reconstructed phase and its evolution over time for a High-Area-to-Mass-Ratio (HAMR) object that AIUB observed for several years.
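For unevenly sampled light curves of this kind, a standard frequency-extraction tool is the Lomb-Scargle periodogram. The sketch below is illustrative only (the abstract does not specify AIUB's actual processing chain, and the period search range is a placeholder); note that for symmetric objects the dominant photometric period can be an integer fraction of the true spin period:

```python
import numpy as np
from scipy.signal import lombscargle

def dominant_period(t, mag, p_min=1.0, p_max=600.0, n_freq=20000):
    """Estimate the dominant period [s] of an unevenly sampled light curve.

    t   : observation epochs [s]
    mag : brightness measurements (e.g., apparent magnitude), ideally
          already detrended for phase-angle and extinction variations
    """
    y = mag - np.mean(mag)                # Lomb-Scargle assumes zero mean
    periods = np.linspace(p_max, p_min, n_freq)
    omega = 2 * np.pi / periods           # trial angular frequencies [rad/s]
    power = lombscargle(t, y, omega)
    return 2 * np.pi / omega[np.argmax(power)]
```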
Abstract:
We present a remote sensing observational method for measuring the spatio-temporal dynamics of ocean waves. Variational techniques are used to recover a coherent space-time reconstruction of oceanic sea states from stereo video imagery. The stereoscopic reconstruction problem is expressed in a variational optimization framework: we design an energy functional whose minimizer is the desired temporal sequence of wave heights, combining photometric observations with spatial and temporal regularizers. A nested iterative scheme is devised to numerically solve, via 3-D multigrid methods, the system of partial differential equations resulting from the optimality condition of the energy functional. The output of the method is the coherent, simultaneous estimate of the wave surface height and radiance at multiple snapshots. We demonstrate the algorithm on real data collected offshore; statistical and spectral analyses are performed, and a comparison with an existing sequential method is presented.
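To give a feel for the variational structure (a data term plus spatial and temporal regularizers), here is a deliberately simplified toy in which the photometric term is replaced by a quadratic fit to noisy height observations and the PDE/multigrid solver by plain gradient descent; names, weights, and the periodic boundary handling are all illustrative:

```python
import numpy as np

def variational_smooth(obs, alpha=0.5, beta=0.5, iters=500, tau=0.05):
    """Toy space-time variational recovery (not the paper's model).

    Minimizes E(h) = ||h - obs||^2 + alpha*||D_x h||^2 + beta*||D_t h||^2
    by explicit gradient descent, where obs[t, x] is a noisy space-time
    height field and D_x, D_t are periodic finite differences.
    """
    h = obs.copy()
    for _ in range(iters):
        lap_x = np.roll(h, 1, axis=1) - 2 * h + np.roll(h, -1, axis=1)
        lap_t = np.roll(h, 1, axis=0) - 2 * h + np.roll(h, -1, axis=0)
        grad = 2 * (h - obs) - 2 * alpha * lap_x - 2 * beta * lap_t
        h -= tau * grad                   # descend the energy gradient
    return h
```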
Abstract:
Statistical methods for the analysis of PSIR MRI
Abstract:
The master's thesis presents methods for the intelligent analysis (data mining) and visualization of 3-D ECG signals, with the aim of increasing the efficiency of ECG analysis by extracting additional information. Visualization is treated as part of the signal-analysis task: imaging techniques and their mathematical description are considered. Algorithms for computing and visualizing signal attributes have been developed and are described together with the mathematical methods and tools used for mining the signal. A pattern-search model was constructed to compare the accuracy of the methods, clustering and classification problems are solved, and a data-visualization program was developed. This approach yields the highest accuracy on the data-mining task, as confirmed in this work. The visualization and analysis techniques considered are also applicable to multi-dimensional signals of other kinds.
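As a generic illustration of the clustering step described above (the thesis's actual features and algorithms are not specified in the abstract; everything here, from the assumed 360 Hz sampling rate to the three clusters, is a placeholder), per-beat feature vectors could be clustered like this:

```python
import numpy as np
from sklearn.cluster import KMeans

FS = 360.0  # assumed sampling rate [Hz]; a placeholder, not from the thesis

def beat_features(beats):
    """Toy per-beat features: amplitude range, mean energy, dominant frequency.
    `beats` is an array of equally long ECG segments, shape (n_beats, n_samples)."""
    feats = []
    for b in beats:
        spectrum = np.abs(np.fft.rfft(b - b.mean()))
        f_dom = np.fft.rfftfreq(len(b), 1.0 / FS)[np.argmax(spectrum)]
        feats.append([b.max() - b.min(), np.mean(b**2), f_dom])
    return np.array(feats)

# synthetic stand-in for real, pre-segmented heartbeats
rng = np.random.default_rng(0)
beats = rng.standard_normal((100, 256))
labels = KMeans(n_clusters=3, n_init=10).fit_predict(beat_features(beats))
```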