987 results for N Euclidean algebra
Abstract:
Fuzzy subsets and fuzzy subgroups are basic concepts in fuzzy mathematics. We concentrate on fuzzy subgroups, dealing with some of their algebraic, topological and complex-analytic properties. The explorations are theoretical and belong to pure mathematics. One of our aims is to show how widely fuzzy subgroups can be used in mathematics, which brings out the richness of this concept. In complex analysis we focus on Möbius transformations, combining them with fuzzy subgroups in the algebraic and topological sense. We also survey MV spaces, with and without a link to fuzzy subgroups. The spectral space is a known concept in MV algebra; we are interested in its topological properties in the MV-semilinear space. Later on, we study MV algebras in connection with Riemann surfaces. The Riemann surface as a concept belongs to complex analysis, and Möbius transformations form a part of the theory of Riemann surfaces. In general, this work gives a good understanding of how different fields of mathematics can be fitted together.
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application they are applied to gray-level image compression. Both new distance transforms are extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map whose weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8-kernels method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
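The two-pass, gray-level-weighted propagation described above can be made concrete with a short sketch. This is not the thesis's published algorithm: the 8-neighbour kernel, the assumed local step cost |Δgray| + 1, and the fixed iteration cap are illustrative choices only.

```python
import numpy as np

def dtocs_sketch(gray, inside, max_rounds=10):
    """Two-pass, chamfer-style sketch of a gray-level-weighted distance transform.

    gray   : 2-D integer array of gray values.
    inside : boolean mask; True marks pixels whose distance is computed,
             False marks the zero-distance reference region.
    Assumed local step cost between 8-neighbours p, q: |gray[p] - gray[q]| + 1.
    """
    h, w = gray.shape
    INF = np.iinfo(np.int64).max // 4
    dist = np.where(inside, INF, 0).astype(np.int64)

    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited in a raster scan
    bwd = [(-dy, -dx) for dy, dx in fwd]          # mirrored offsets for the reverse scan
    raster = [(y, x) for y in range(h) for x in range(w)]

    def sweep(order, offsets):
        for y, x in order:
            if not inside[y, x]:
                continue
            best = dist[y, x]
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    step = abs(int(gray[y, x]) - int(gray[ny, nx])) + 1
                    best = min(best, dist[ny, nx] + step)
            dist[y, x] = best

    for _ in range(max_rounds):                   # complicated images may need several rounds
        before = dist.copy()
        sweep(raster, fwd)                        # forward pass
        sweep(raster[::-1], bwd)                  # backward pass
        if np.array_equal(before, dist):          # converged
            break
    return dist
```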
Abstract:
Phenomena with a constrained sample space appear frequently in practice. This is the case, e.g., with strictly positive data, or with compositional data such as percentages or proportions. If the natural measure of difference is not the absolute one, simple algebraic properties show that it is more convenient to work with a geometry different from the usual Euclidean geometry in real space, and with a measure different from the usual Lebesgue measure, leading to alternative models which better fit the phenomenon under study. The general approach is presented and illustrated using the normal distribution, both on the positive real line and on the D-part simplex. The original ideas of McAlister in his introduction to the lognormal distribution in 1879 are recovered and updated.
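As an illustration of working in a transformed geometry rather than the raw Euclidean one, the sketch below uses the ordinary log transform for strictly positive data and the centred log-ratio transform, a standard choice for compositional data that the abstract does not name explicitly; both are assumptions for illustration.

```python
import numpy as np

# Strictly positive data: differences are relative, so the normal model is
# placed on log(x) rather than on x itself (McAlister's lognormal idea).
x = np.array([0.5, 1.2, 3.7, 8.0])
log_x = np.log(x)
mu, sigma = log_x.mean(), log_x.std(ddof=1)   # normal fit on the transformed line

# D-part compositions (proportions summing to 1): the centred log-ratio (clr)
# maps the simplex into a Euclidean space where the usual geometry applies.
def clr(p):
    p = np.asarray(p, dtype=float)
    g = np.exp(np.mean(np.log(p)))            # geometric mean of the parts
    return np.log(p / g)                      # coordinates sum to zero

print(clr(np.array([0.2, 0.5, 0.3])))
```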
Abstract:
A novel Fe3+-selective and turn-on fluorescent probe 1 incorporating a rhodamine fluorophore and quinoline subunit was synthesized. Probe 1 displayed high selectivity for Fe3+ in CH3CN–H2O (95:5 v/v) in the presence of other relevant metal cations. Interaction with Fe3+ in 1:1 stoichiometry could trigger a significant fluorescence enhancement due to the formation of the ring-open form. The fluorescent response images were investigated by a novel Euclidean distance method based on red, green, and blue values. A linear relationship was observed between fluorescence intensity changes and Fe3+ concentrations from 7.3 × 10−7 to 3.6 × 10−5 mol L−1.
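The "Euclidean distance method based on red, green, and blue values" admits a very simple reading; the sketch below shows one plausible form of it, where the reference colour and the calibration step are hypothetical and not taken from the paper.

```python
import numpy as np

def rgb_euclidean_distance(img, reference_rgb):
    """Per-pixel Euclidean distance in RGB space from a reference colour.

    img           : H x W x 3 array of red, green, blue values.
    reference_rgb : length-3 array, e.g. the mean colour of a blank image.
    """
    diff = img.astype(float) - np.asarray(reference_rgb, dtype=float)
    return np.sqrt(np.sum(diff ** 2, axis=-1))

# Hypothetical calibration: the mean distance of a response image from the
# blank image is related to the Fe3+ concentration via a fitted straight line.
# d_mean = rgb_euclidean_distance(response_img, blank_mean_rgb).mean()
```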
Abstract:
This master's thesis investigates the moduli of families of curves and the capacities of the Grötzsch and Teichmüller rings, which are applied in the main parts of the thesis. The extremal properties of these rings are discussed in connection with spherical symmetrization. Applications are given to the study of the distortion of quasiconformal maps in Euclidean n-dimensional space.
Abstract:
In this Licentiate thesis we investigate the absolute ratio δ, j, j̃ and hyperbolic ρ metrics and their relations to each other. Various growth estimates are given for quasiconformal maps both in the plane and in space. Some Hölder constants are refined with respect to the δ, j and j̃ metrics. Some new results regarding the Hölder continuity of quasiconformal and quasiregular mappings of the unit ball with respect to the Euclidean and hyperbolic metrics are given, refining results obtained by many authors in the 1980s. Applications are given to the study of metric spaces and of quasiconformal and quasiregular maps, in the plane as well as in space.
Abstract:
Whenever a spacecraft is launched it is essential that the algorithms in the on-board software systems and at ground control are efficient and reliable over extended periods of time. Geometric numerical integrators, and in particular variational integrators, have both these characteristics. In "Numerics of Spacecraft Dynamics" new numerical integrators are presented and analysed in depth. These algorithms have been designed specifically for the dynamics of spacecraft and artificial satellites in Earth orbits. Full analytical solutions to a class of integrable deformations of the two-body problem in classical mechanics are derived, and a systematic method to compute variational integrators to arbitrary order with a computer algebra system is introduced.
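As a flavour of what a geometric (variational) integrator looks like in practice, here is the classical Störmer-Verlet scheme applied to a normalized two-body problem; this is a minimal textbook sketch, not one of the new integrators presented in the thesis. The practical appeal for long orbital propagation is that such schemes keep the energy error bounded instead of letting it drift.

```python
import numpy as np

def stormer_verlet(q0, v0, accel, dt, steps):
    """Störmer-Verlet: the simplest variational (symplectic) integrator."""
    q, v = np.array(q0, float), np.array(v0, float)
    a = accel(q)
    traj = [q.copy()]
    for _ in range(steps):
        v_half = v + 0.5 * dt * a      # half kick
        q = q + dt * v_half            # drift
        a = accel(q)
        v = v_half + 0.5 * dt * a      # half kick
        traj.append(q.copy())
    return np.array(traj)

# Normalized two-body (Kepler) problem: gravitational parameter mu = 1.
mu = 1.0
kepler_accel = lambda q: -mu * q / np.linalg.norm(q) ** 3
orbit = stormer_verlet([1.0, 0.0], [0.0, 1.0], kepler_accel, dt=0.01, steps=10_000)
```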
Abstract:
This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians thereafter. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has been studied recently e.g. by L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable studied by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions which for the parameter value p = 2 coincide with the classical functions. Gaussian hypergeometric functions have an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and those bounds are given in a simple form. This theory has a long history which goes back two centuries and includes names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory. The invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem: learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions for cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts: Part I provides the background for the research work and summarizes the most central results, while Part II consists of the five original research articles that are the main contribution of this thesis.
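To make the pairwise least-squares idea concrete, here is a minimal linear sketch; it is not the kernelized RankRLS of the thesis nor its matrix-algebra shortcuts, and the toy data and regularization value are purely illustrative.

```python
import numpy as np

def pairwise_ls_ranker(X, y, reg=1.0):
    """Linear ranker minimizing (y - Xw)^T L (y - Xw) + reg * ||w||^2,
    where L = n*I - 1*1^T; this quadratic form is proportional to the sum
    over all pairs (i, j) of ((y_i - y_j) - (s_i - s_j))^2 with scores s = Xw.
    """
    n, d = X.shape
    L = n * np.eye(n) - np.ones((n, n))          # Laplacian of the complete graph
    A = X.T @ L @ X + reg * np.eye(d)
    b = X.T @ L @ y
    return np.linalg.solve(A, b)

# Toy usage: rank objects by the learned scores (higher score = ranked first).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -0.5, 0.0, 2.0, 0.3]) + 0.1 * rng.normal(size=50)
w = pairwise_ls_ranker(X, y, reg=0.1)
ranking = np.argsort(-(X @ w))
```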
Abstract:
The assembly and maintenance of the International Thermonuclear Experimental Reactor (ITER) vacuum vessel (VV) is highly challenging, since the tasks performed by the robot involve welding, material handling, and machine cutting from inside the VV. The VV is made of stainless steel, which has poor machinability and tends to work harden very rapidly, and all the machining operations need to be carried out from inside the ITER VV. A general industrial robot cannot be used due to its poor stiffness in the heavy-duty machining process, which would cause many problems, such as poor surface quality, tool damage, and low accuracy. Therefore, one of the most suitable options is a lightweight mobile robot which is able to move around inside the VV and perform different machining tasks by replacing different cutting tools. Reducing the mass of the robot manipulators offers many advantages: reduced material costs, reduced power consumption, the possibility of using smaller actuators, and a higher payload-to-robot weight ratio. Offsetting these advantages, the lighter robot is more flexible, which makes it more difficult to control. To achieve good machining surface quality, the tracking of the end effector must be accurate, and an accurate model for the more flexible robot must be constructed. This thesis studies the dynamics and control of a 10 degree-of-freedom (DOF) redundant hybrid robot (a 4-DOF serial mechanism and a 6-DOF 6-UPS hexapod parallel mechanism), hydraulically driven, with flexible rods, under the influence of machining forces. Firstly, the flexibility of the bodies is described using the floating frame of reference formulation (FFRF). A finite element model (FEM) provided the Craig-Bampton (CB) modes needed for the FFRF. A dynamic model of the system of six closed-loop mechanisms was assembled using the constrained Lagrange equations and the Lagrange multiplier method. Subsequently, the reaction forces between the parallel and serial parts were used to study the dynamics of the serial robot. A PID control based on position predictions was implemented independently to control the hydraulic cylinders of the robot. Secondly, to achieve greater end-effector trajectory-tracking accuracy, and hence better surface quality in machining, a robust control of the actuators for the flexible links has to be derived. This thesis investigates intelligent control of the hydraulically driven parallel part of the robot based on the dynamic model, using two schemes: (1) a fuzzy-PID self-tuning controller combining conventional PID control with fuzzy logic, and (2) an adaptive neuro-fuzzy inference system PID (ANFIS-PID) controller that self-tunes the gains of the PID controller; both are implemented independently to control each hydraulic cylinder of the parallel mechanism based on rod-length predictions. The serial component of the hybrid robot can be analyzed using the equilibrium of reaction forces at the universal-joint connections of the hexa-element. To achieve precise positional control of the end effector for maximum-precision machining, the hydraulic cylinders should be controlled to hold the hexa-element. Thirdly, a finite element approach to multibody systems using the special Euclidean group SE(3) framework is presented for a parallel mechanism with flexible piston rods under the influence of machining forces. The flexibility of the bodies is described using the nonlinear interpolation method with an exponential map.
The equations of motion take the form of a differential-algebraic equation on a Lie group, which is solved using a Lie group time integration scheme. The method relies on the local description of motions, so that it provides a singularity-free formulation, and no parameterization of the nodal variables needs to be introduced. The flexible slider constraint is formulated using a Lie group and used for modeling a flexible rod sliding inside a cylinder. The dynamic model of the system of six closed-loop mechanisms was assembled using Hamilton's principle and the Lagrange multiplier method. A linearized hydraulic control system based on rod-length predictions was implemented independently to control the hydraulic cylinders. The results of the simulations demonstrating the behavior of the robot machine are presented for each case study. In conclusion, this thesis studies the dynamic analysis of a special hybrid (serial-parallel) robot for the above-mentioned ITER tasks and investigates different control algorithms that can significantly improve machining performance. These analyses and results provide valuable insight into the design and control of parallel robots with flexible rods.
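For the position-prediction-based cylinder control mentioned above, a minimal discrete PID loop per cylinder might look like the sketch below; the gains, sample time and anti-windup limit are placeholders, and the fuzzy or ANFIS self-tuning described in the thesis would adjust such gains online rather than keep them fixed.

```python
class CylinderPID:
    """Minimal discrete PID controller, one instance per hydraulic cylinder."""

    def __init__(self, kp, ki, kd, dt, integral_limit=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.integral_limit = integral_limit

    def update(self, predicted_length, measured_length):
        """Return the control signal from predicted and measured rod lengths."""
        error = predicted_length - measured_length
        self.integral += error * self.dt
        # simple anti-windup clamp on the integral term
        self.integral = max(-self.integral_limit,
                            min(self.integral, self.integral_limit))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical setup: six independent controllers, one per cylinder of the hexapod.
controllers = [CylinderPID(kp=200.0, ki=50.0, kd=5.0, dt=0.001) for _ in range(6)]
```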
Virtual Testing of Active Magnetic Bearing Systems based on Design Guidelines given by the Standards
Abstract:
Active magnetic bearings offer many advantages that have brought new applications to industry. However, like all new technology, active magnetic bearings also have downsides, one of which is the low level of standardization. This thesis mainly studies the ISO 14839 standard, and more specifically its system verification methods. These verification methods are applied in a practical test with an existing active magnetic bearing system. The system is simulated in Matlab using a rotor-bearing dynamics toolbox, but this study does not include the exact simulation code or direct algebraic calculations. However, this study provides proof that standardized simulation methods can be applied to practical problems.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
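The ridge-projection step can be illustrated with a simpler, well-known iteration than the thesis's trust-region Newton method: subspace-constrained mean shift on a Gaussian kernel density estimate. The sketch below is that substitute method, with the bandwidth and iteration limits chosen arbitrarily.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Gradient and Hessian (up to a constant factor) of a Gaussian KDE at x."""
    diff = data - x                                        # shape (n, D)
    w = np.exp(-np.sum(diff ** 2, axis=1) / (2 * h ** 2))
    n = len(data)
    g = (w[:, None] * diff).sum(axis=0) / (n * h ** 2)
    outer = np.einsum('n,ni,nj->ij', w, diff, diff) / h ** 2
    H = (outer - w.sum() * np.eye(x.size)) / (n * h ** 2)
    return w, g, H

def project_to_ridge(x0, data, h, ridge_dim=1, max_iters=500, tol=1e-7):
    """Subspace-constrained mean shift toward a ridge of the KDE."""
    x = np.array(x0, float)
    for _ in range(max_iters):
        w, g, H = kde_grad_hess(x, data, h)
        vals, vecs = np.linalg.eigh(H)                     # ascending eigenvalues
        V = vecs[:, :x.size - ridge_dim]                   # normal space of the ridge
        if np.linalg.norm(V.T @ g) < tol:                  # projected gradient ~ 0
            break
        mean_shift = (w[:, None] * data).sum(axis=0) / w.sum() - x
        x = x + V @ (V.T @ mean_shift)                     # move only in the normal space
    return x
```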
Abstract:
In this work, new color matching functions are derived from the CIE x, y, z color matching functions by a linear transformation. The required transformation matrix is found by optimizing the CIE and BFD-RIT color difference ellipses in a Matlab environment. As a result, a transformation matrix was obtained, together with the new color matching functions transformed with it and a CIELAB-type color space. Measured by Euclidean distance, the shape and size of the CIE and BFD-RIT color difference ellipses improved by about one third, which was also the goal.
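The core operation, applying a linear transformation to the CIE x, y, z color matching functions and measuring color differences by Euclidean distance, can be sketched as follows; the matrix values are placeholders, not the optimized matrix obtained in the thesis. In an optimization like the one described, the entries of M would be the free variables, and the objective would penalize how far the transformed color-difference ellipses are from circles of equal size.

```python
import numpy as np

# Placeholder 3x3 transformation matrix (illustrative values only).
M = np.array([[ 0.9,  0.1,  0.0],
              [-0.1,  1.0,  0.1],
              [ 0.0,  0.0,  1.0]])

def transform_cmfs(cmf_xyz):
    """Apply the linear transformation to CIE color matching functions.

    cmf_xyz : N x 3 array, one row (x, y, z) per wavelength sample.
    """
    return cmf_xyz @ M.T

def color_difference(c1, c2):
    """Euclidean distance between two points of the transformed color space."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))
```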
Abstract:
This work presents theories and results of the Russian mathematician A. I. Shirshov on the combinatorics of words, and shows how they apply to the world of PI-algebras. When examining Shirshov's results, words are treated as distinct combinatorial objects, and Shirshov's Lemma, which is the foundation of this work, is proved. According to the lemma, sufficiently long words exhibit a certain regularity; the lemma is proved three times. The first proof yields the existence of a sufficiently long word. The second proof follows Shirshov's original proof. The third proof gives a bound for a sufficiently long word that is better suited to practical use. After this, words are treated as algebraic objects. As the main result of the work, Shirshov's Height Theorem is proved: every element of a finitely generated PI-algebra is a linear combination of words ω_1^{k_1} ··· ω_d^{k_d}, where both the lengths of the words ω_i and the index i are bounded. Shirshov's Height Theorem directly yields a positive solution to the Kurosh problem for PI-algebras, and it gives a bound on the number of elements needed to generate the algebra as a module. In addition, as a second application, the applicability of Shirshov's results to the nilpotency of the Jacobson radical is presented without proofs. The main source is the book by A. Kanel-Belov and L. H. Rowen, Computational Aspects of Polynomial Identities.
Abstract:
Optimization of quantum measurement processes has a pivotal role in carrying out better, more accurate or less disruptive, measurements and experiments on a quantum system. Especially, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with the convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, measuring how ‘close’ to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, quantifying the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.