9 results for Unified Transform Kernel
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus our attention on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application of wavelets in geophysics are the object of study of this work. We first use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
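To make the time-frequency trade-off described above concrete, here is a minimal sketch (not from the thesis; it assumes Python with NumPy and the PyWavelets package, and a hypothetical synthetic signal) contrasting a Fourier spectrum, which reveals which frequencies occur but not when, with a continuous wavelet transform, which resolves frequency content over time:

import numpy as np
import pywt

# Hypothetical non-stationary signal: a 5 Hz tone, with a 50 Hz burst in the second half
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t)
signal[512:] += np.sin(2 * np.pi * 50 * t[512:])

# Fourier spectrum: both tones appear, but the timing of the burst is lost
spectrum = np.abs(np.fft.rfft(signal))

# Continuous wavelet transform with a Morlet wavelet: each row of `coeffs` is
# the signal analyzed at one scale, so large small-scale coefficients appear
# only in the second half, localizing the 50 Hz component in time
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=t[1] - t[0])
print(coeffs.shape)  # (127, 1024): scales x time samples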
Abstract:
Machine learning comprises a series of techniques for the automatic extraction of meaningful information from large collections of noisy data. In many real-world applications, data is naturally represented in structured form. Since traditional methods in machine learning deal with vectorial information, they require an a priori form of preprocessing. Among all the learning techniques for dealing with structured data, kernel methods are recognized as having a strong theoretical background and being effective approaches. They do not require an explicit vectorial representation of the data in terms of features, but rely on a measure of similarity between any pair of objects of a domain, the kernel function. Designing fast and good kernel functions is a challenging problem. In the case of tree-structured data two issues become relevant: kernels for trees should not be sparse and should be fast to compute. The sparsity problem arises when, given a dataset and a kernel function, most structures of the dataset are completely dissimilar to one another. In those cases the classifier has too little information for making correct predictions on unseen data; in fact, it tends to produce a discriminating function behaving like the nearest-neighbour rule. Sparsity is likely to arise for some standard tree kernel functions, such as the subtree and subset tree kernels, when they are applied to datasets whose node labels belong to a large domain. A second drawback of using tree kernels is the time complexity required in both the learning and classification phases. Such complexity can sometimes prevent the application of the kernel in scenarios involving large amounts of data. This thesis proposes three contributions for resolving the above issues of kernels for trees. A first contribution aims at creating kernel functions which adapt to the statistical properties of the dataset, thus reducing its sparsity with respect to traditional tree kernel functions. Specifically, we propose to encode the input trees by an algorithm able to project the data onto a lower-dimensional space with the property that similar structures are mapped similarly. By building kernel functions on the lower-dimensional representation, we are able to perform inexact matchings between different inputs in the original space. A second contribution is the proposal of a novel kernel function based on the convolution kernel framework. A convolution kernel measures the similarity of two objects in terms of the similarities of their subparts. Most convolution kernels are based on counting the number of shared substructures, partially discarding information about their position in the original structure. The kernel function we propose is, instead, especially focused on this aspect. A third contribution is devoted to reducing the computational burden related to the calculation of a kernel function between a tree and a forest of trees, which is a typical operation in the classification phase and, for some algorithms, also in the learning phase. We propose a general methodology applicable to convolution kernels. Moreover, we show an instantiation of our technique when kernels such as the subtree and subset tree kernels are employed. In those cases, Directed Acyclic Graphs can be used to compactly represent shared substructures in different trees, thus reducing the computational burden and storage requirements.
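As background for the sparsity discussion above, the following toy sketch (illustrative only, not the thesis's implementation; Node, subtree_signatures, and the example trees are hypothetical names) shows the counting idea behind a subtree-style convolution kernel: two trees are compared through the multiset of complete subtrees they share, so trees whose labels come from a large domain easily share nothing and the kernel value collapses to zero:

from collections import Counter

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def subtree_signatures(root):
    """Canonical string for the complete subtree rooted at each node."""
    sigs = []
    def rec(n):
        s = n.label + "(" + ",".join(rec(c) for c in n.children) + ")"
        sigs.append(s)
        return s
    rec(root)
    return Counter(sigs)

def subtree_kernel(t1, t2):
    """Counts pairs of nodes in t1 and t2 that root identical subtrees."""
    c1, c2 = subtree_signatures(t1), subtree_signatures(t2)
    return sum(c1[s] * c2[s] for s in c1 if s in c2)

# Two toy trees sharing the subtree b(d(),e())
t1 = Node("a", [Node("b", [Node("d"), Node("e")]), Node("c")])
t2 = Node("f", [Node("b", [Node("d"), Node("e")])])
print(subtree_kernel(t1, t2))  # 3: the shared subtrees d(), e(), and b(d(),e())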
Abstract:
This thesis focuses on studying molecular structure and internal dynamics by using pulsed-jet Fourier transform microwave (PJ-FTMW) spectroscopy combined with theoretical calculations. Several kinds of interesting chemical problems are investigated by analyzing the MW spectra of the corresponding molecular systems. First, the general aspects of rotational spectroscopy are summarized, and then the basic theory of molecular rotation and the experimental method are described briefly. The ab initio and density functional theory (DFT) calculations used in this thesis to assist the assignment of rotational spectra are also included. From chapter 3 to chapter 8, several molecular systems concerning different kinds of general chemical problems are presented. In chapter 3, the conformation and internal motions of dimethyl sulfate are reported. The internal rotations of the two methyl groups split each rotational transition into several component lines, allowing for the determination of accurate values of the V3 barrier height to internal rotation and of the orientation of the methyl groups with respect to the principal axis system. In chapters 4 and 5, the results concerning two kinds of carboxylic acid bi-molecules, formed via two strong hydrogen bonds, are presented. This kind of adduct is interesting also because a double proton transfer can easily take place, connecting either two equivalent or two non-equivalent molecular conformations. Chapter 6 concerns a medium-strong hydrogen-bonded molecular complex of an alcohol with an ether; the ethanol-dimethyl ether dimer was chosen as the model system for this purpose. Chapter 7 focuses on the weak halogen…H hydrogen bond interaction. The nature of the O-H…F and C-H…Cl interactions has been discussed through analysis of the rotational spectra of CH3CHClF/H2O. In chapter 8, two molecular complexes concerning the halogen bond interaction are presented.
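For reference, the hindering potential whose barrier height V3 is extracted from such splittings is conventionally written, in standard internal-rotation notation (assumed here, not quoted from the thesis), as

    V(\alpha) = \frac{V_3}{2}\left(1 - \cos 3\alpha\right),

where α is the torsional angle of the methyl top. Tunnelling through this threefold barrier splits each rotational line into A and E components, and the size of the splitting, together with the orientation of the internal-rotation axis in the principal axis system, determines V3.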
Abstract:
Oceanic islands can be divided, according to their origin, into volcanic and tectonic. Volcanic islands are due to excess volcanism. Tectonic islands are mainly formed by vertical tectonic motions of blocks of oceanic lithosphere along transverse ridges flanking transform faults at slow and ultraslow mid-ocean ridges. These vertical motions are due to a reorganization of the geometry of the transform plate boundary, with the transition from transcurrent tectonics to transtensive and/or transpressive tectonics and the consequent formation of the transverse ridges. Tectonic islands can also be located at the ridge-transform intersection: in this case the uplift is due to the movement of the long-lived detachment faults located along the flanks of the mid-ocean ridges. The "Vema" paleoisland (equatorial Atlantic) lies at the summit of the southern transverse ridge of the Vema transform. It is now 450 m below sea level (bsl) and is capped by a 500 m-thick carbonate platform dated by 87Sr/86Sr at 10 Ma. Three tectonic paleoislands sit on the summit of the transverse ridge flanking the Romanche megatransform (equatorial Atlantic). They are now about 1,000 m bsl and are formed by 300 m-thick carbonate platforms dated by 87Sr/86Sr between 11 and 6 Ma. The tectonic paleoisland "Atlantis Bank" is located on the Southwest Indian Ridge, along the Atlantis II transform, and is today 700 m bsl. The only modern example of an oceanic tectonic island is St. Paul Rocks (equatorial Atlantic), located along the St. Paul transform. This archipelago is the top of a peridotitic massif that now lies in a left overstep undergoing transpression. Oceanic volcanic islands are characterized by rapid growth and subsequent thermal subsidence and drowning; in contrast, oceanic tectonic islands may experience one or more stages of emersion related to vertical tectonic events along the large oceanic fracture zones.
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a substantial reduction in iterations and CPU time. The second part is devoted to the study of the temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we focus our attention on remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers while varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the amount of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
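In standard transport notation (symbols assumed here rather than quoted from the thesis), the Legendre moments obtained by direct quadrature are

    \sigma_{s,l}(E \to E') = \int_{-1}^{1} P_l(\mu)\, \sigma_s(E \to E', \mu)\, d\mu,

where μ is the scattering cosine, and the Doppler-broadened kernel is recovered from the truncated expansion

    \sigma_s(E \to E', \mu) \approx \sum_{l=0}^{L} \frac{2l+1}{2}\, \sigma_{s,l}(E \to E')\, P_l(\mu);

the convergence analysis mentioned above concerns the behaviour of these moments as the order L grows.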
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem by using the maximum entropy principle is illustrated. The code UMESTRAT is built to apply the described strategy in a semiautomatic way, and its application is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model, by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L, and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energy values at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Secondly, the bremsstrahlung radiative contribution due to secondary electrons is characterized. The bremsstrahlung radiation is characterized in terms of space, angle, and energy, for all elements with Z = 1-92 in the energy range 1-150 keV, by using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung radiative contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed which allows the introduction of this contribution into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
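In general terms (a standard statement of the principle, with symbols assumed here rather than taken from UMESTRAT), solving the linear inverse problem g = Kf for a spectrum f from measured data g and response kernel K by maximum entropy means selecting, among all non-negative spectra consistent with the data, the one of maximal entropy:

    \max_{f \ge 0}\; S[f] = -\sum_j f_j \ln\frac{f_j}{m_j}
    \quad \text{subject to} \quad
    \chi^2[f] = \sum_i \frac{\left(g_i - (Kf)_i\right)^2}{\sigma_i^2} \le N,

where m is a prior model, σ_i are the data uncertainties, and N is the number of data points; this regularizes the otherwise ill-posed deconvolution.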
Abstract:
The research project aims to study and develop control techniques for a generalized three-phase and multi-phase electric drive able to efficiently manage most of the drive types available for traction applications. The generalized approach is extended to both linear and non-linear machines operating in the magnetic saturation region, starting from an experimental flux characterization and applying the general inductance definition. The algorithm is able to manage fragmented drives powered by different batteries or energy sources and ensures operability even in case of faults in parts of the system. The algorithm was tested using model-in-the-loop simulation in a software environment and then applied on experimental test benches in collaboration with an external company.
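The general inductance definition invoked here is usually stated (standard machine-theory notation, assumed rather than quoted from the thesis) by distinguishing, for an experimentally measured flux characteristic λ(i), the apparent inductance from the incremental inductance:

    L_{app}(i) = \frac{\lambda(i)}{i}, \qquad l_{inc}(i) = \frac{d\lambda(i)}{di}.

In the linear region the two coincide; under magnetic saturation λ(i) flattens, the incremental value drops below the apparent one, and it is the incremental inductance that multiplies di/dt in the voltage equation and therefore governs the current dynamics seen by the controller.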
Abstract:
The aim of this thesis is to discuss and develop the Unified Patent Court project, to account for the role it could play in implementing judicial specialisation in the Intellectual Property field. To provide an original contribution to the existing literature on the topic, this work addresses the issue of how the Unified Patent Court could relate to the other forms of judicial specialisation already operating in the European Union context. This study presents a systematic assessment of the not-yet-operational Unified Patent Court within the EU judicial system, which has recently shown a trend towards development outside the institutional framework of the Court of Justice of the European Union. The objective is to understand to what extent the planned implementation of the Unified Patent Court could succeed in responding to the need for specialisation while remaining compliant with the EU legal and constitutional framework. Using the Unified Patent Court as a case study, it is argued that specialised courts in the field of Intellectual Property have a significant role to play in the European judicial system and offer an adequate response to the growing complexity of business operations and relations. The significance of this study lies in analysing whether the UPC can still be considered an appropriate solution for unifying the European patent litigation system. The research considers its significant deficiencies, which risk having a negative effect on European Union institutional procedures. In this perspective, this work aims to contribute to identifying the potential negative consequences of this reform. It also considers different alternatives for a European patent system that could effectively promote innovation in Europe.
Abstract:
We start in Chapter 2 by investigating linear matrix-valued SDEs and the Itô-stochastic Magnus expansion. The Itô-stochastic Magnus expansion provides an efficient numerical scheme to solve matrix-valued SDEs. We show convergence of the expansion up to a stopping time τ and provide an asymptotic estimate of the cumulative distribution function of τ. Moreover, we show how to apply it to solve SPDEs with one and two spatial dimensions, with high accuracy, by combining it with the method of lines. We will see that the Magnus expansion allows us to use GPU techniques, leading to major performance improvements compared to a standard Euler-Maruyama scheme. In Chapter 3, we study a short-rate model in a Cox-Ingersoll-Ross (CIR) framework for negative interest rates. We define the short rate as the difference of two independent CIR processes and add a deterministic shift to guarantee a perfect fit to the market term structure. We show how to use the Gram-Charlier expansion to efficiently calibrate the model to the market swaption surface and price Bermudan swaptions with good accuracy. We take two different perspectives on rating transition modelling. In Section 4.4, we study inhomogeneous continuous-time Markov chains (ICTMC) as a candidate for a rating model with deterministic rating transitions. We extend this model by taking a Lie group perspective in Section 4.5, to allow for stochastic rating transitions. In both cases, we compare the most popular choices of change-of-measure technique and show how to efficiently calibrate both models to the available historical rating data and market default probabilities. At the very end, we apply the techniques shown in this thesis to minimize the collateral-inclusive Credit/Debit Valuation Adjustments under the constraint of small collateral postings, by using a collateral account dependent on rating triggers.
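To fix ideas about the expansion named above (a standard constant-coefficient example in the notation of the stochastic Magnus literature, not quoted from the thesis), consider the matrix-valued linear SDE

    dX_t = B X_t\, dt + A X_t\, dW_t, \qquad X_0 = I,

with constant matrices A and B. The first-order Itô-stochastic Magnus approximation writes the solution as a single matrix exponential,

    X_t \approx \exp\!\left( \left(B - \tfrac{1}{2}A^2\right) t + A\, W_t \right),

which is exact in the scalar (commuting) case; higher orders add iterated commutators of A and B weighted by stochastic integrals. Evaluating many such matrix exponentials across Monte Carlo paths is an embarrassingly parallel workload, which is what makes a GPU implementation attractive compared to a stepwise Euler-Maruyama scheme.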