36 results for Semigroups of linear operators
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
A steady increase in practical industrial applications has secured a place for linear motors. They provide high dynamics and high positioning accuracy, together with high reliability and durability of all components of the system. Machines with linear motors have promising prospects in modern industry. This thesis explains what a linear motor is, where linear motors are used, and the current state of their market. It can help the reader judge the feasibility of applying linear motors in manufacturing and the benefits of doing so.
Abstract:
This dissertation describes a networking approach to infinite-dimensional systems theory, in which there is minimal distinction between inputs and outputs. We introduce and study two closely related classes of systems, namely state/signal systems and port-Hamiltonian systems, and describe how they relate to each other. Some basic theory for these two classes of systems and for their interconnections is provided. The main emphasis lies on passive and conservative systems, and the theoretical concepts are illustrated using the example of a lossless transfer line. Much remains to be done in this field, and we point to some directions for future studies as well.
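For orientation, the port-Hamiltonian form may be written in its finite-dimensional version as follows (an assumed standard form for illustration only; the dissertation itself works in the infinite-dimensional setting):

```latex
\dot{x}(t) = \bigl(J - R\bigr)\,\nabla H\bigl(x(t)\bigr) + B\,u(t),
\qquad
y(t) = B^{\top}\,\nabla H\bigl(x(t)\bigr),
```

with $J = -J^{\top}$ the interconnection structure, $R = R^{\top} \ge 0$ the dissipation, and $H$ the storage (Hamiltonian) function. Passivity then follows from the power balance $\frac{d}{dt} H(x(t)) \le y(t)^{\top} u(t)$, with equality in the conservative case $R = 0$.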
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase. The differential evolution classifier, which applies an individually optimized distance measure to each new data set to be classified, is generalized here to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created, and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy with the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure.
The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process, the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for a given data set automatically and systematically. After the optimal distance measures and their parameters are found, the results are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy. The main outcomes of the work are six new generalized versions of the previously proposed differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
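The OWA aggregation of normalized distance values can be sketched as follows. This is a minimal illustration: the two stand-in distance measures, the normalization scheme and the fixed weights are simplified placeholders, not the optimized components of the thesis.

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted averaging: sort values in descending
    order, then take the weighted sum with the given weights."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(np.dot(v, w))

def total_distance(sample, prototype, metrics, owa_weights):
    """Aggregate several normalized distance measures between a
    sample and a class prototype into one total distance."""
    d = np.asarray([m(sample, prototype) for m in metrics])
    d = d / (d.max() + 1e-12)          # normalize to [0, 1]
    return owa(d, owa_weights)

# Two simple distance measures standing in for the optimized pool.
euclidean = lambda a, b: float(np.linalg.norm(a - b))
manhattan = lambda a, b: float(np.abs(a - b).sum())

x = np.array([1.0, 2.0])
proto = np.array([0.0, 0.0])
total = total_distance(x, proto, [euclidean, manhattan], [0.5, 0.5])
```

A sample would then be assigned to the class whose prototype minimizes this total distance.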
Abstract:
The Finnish securities markets are being harmonized to enable better, more reliable and more timely settlement of securities. Omnibus accounts are a common practice in the European securities markets, but Finland forbids their use by its domestic investors. There is a possibility that omnibus account usage will be allowed for Finnish investors in the future. This study aims to give Finnish investors and account operators a comprehensive picture of the costs and benefits that the omnibus account structure would have for them. The study uses qualitative research methods, with a literature review providing the framework. Various research articles, regulatory documents, studies performed by European organisations, and Finnish news reports are used to analyse the costs and benefits of omnibus accounts. The viewpoint is strictly that of account operators and investors, and the different effects on them are considered. The results of the analysis show that there are a number of costs and benefits that investors and account operators must take into consideration regarding omnibus accounts. The costs are related to the development of IT systems so that participants are able to adapt to the new structure and operate according to its needs. A decrease in the transparency of holdings is a disadvantage of the structure and needs to be assessed carefully to avoid the problems it might bring. The benefits are mostly related to increased competition in the securities markets as well as to possible cost reductions in securities settlement. The costs and benefits were analysed according to the study plan of this thesis; as a result, the significance and impact of omnibus accounts for Finnish investors and account operators depends on the level of competition and on the decisions that market participants make when determining whether the account structure is beneficial for their operations.
Abstract:
The need for image data compression has become increasingly evident over the past decade with the growth of applications based on image data. Particular attention is currently paid to spectral images, whose storage and transmission require considerable disk space and bandwidth. The wavelet transform has proven to be a good solution for lossy data compression. Its implementation in subband coding is based on wavelet filters, and the problem is choosing a suitable wavelet filter for the different kinds of images to be compressed. This thesis presents a survey of compression methods based on the wavelet transform. The main focus of the work is the determination of orthogonal filters by parametrization. The thesis also demonstrates, by means of algebraic equations, the equivalence of two different approaches. The experimental part contains a set of tests that justify the need for parametrization: different images require different filters, and different compression ratios are achieved with different filters. Finally, compression of spectral images using the wavelet transform is implemented.
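As a hedged illustration of wavelet-based lossy compression, the following sketch uses the simple Haar filter pair rather than the parametrized orthogonal filters studied in the thesis. It performs one subband split and zeroes small detail coefficients, which is the basic mechanism behind the lossy compression discussed above.

```python
import numpy as np

def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform:
    split into low-pass (averages) and high-pass (details)."""
    s = np.asarray(signal, dtype=float)
    low = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    high = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return low, high

def inverse_haar_1d(low, high):
    """Exact inverse of haar_1d (orthogonal filter bank)."""
    out = np.empty(low.size * 2)
    out[0::2] = (low + high) / np.sqrt(2.0)
    out[1::2] = (low - high) / np.sqrt(2.0)
    return out

def compress(signal, threshold):
    """Lossy compression: zero out small detail coefficients."""
    low, high = haar_1d(signal)
    high = np.where(np.abs(high) < threshold, 0.0, high)
    return low, high

sig = np.array([4.0, 4.0, 8.0, 8.0, 1.0, 2.0, 3.0, 4.0])
low, high = compress(sig, threshold=0.8)
rec = inverse_haar_1d(low, high)
```

Smooth regions survive thresholding exactly, while fine detail is approximated; for 2-D spectral images the same split is applied along rows, columns, and bands.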
Abstract:
The objective of this study was to examine how Bluetooth technology can affect the value network of the ICT sector and the roles of the actors in the sector. The study used qualitative methods and had features of futures research, since exploratory methods were also applied. It was largely based on literature and articles, on the basis of which scenarios of the future ICT value network were constructed. The study also included an empirical part in which two group discussions were organized, where industry actors discussed the constructed scenarios and, more generally, the effects of Bluetooth on the value network. According to the study, the future value network of the ICT sector will be structured as a strategic value network led by different actors in different market situations. The strategic center of the network may change over time as the applications of Bluetooth increase, and it may vary from one market area to another. Bluetooth creates a new communication channel alongside the existing ones and may locally replace the use of current channels for data transfer. Bluetooth may give rise to numerous new business opportunities and can be used to provide value-added services. The greatest changes can be expected to affect the business of telecom operators and content providers.
Abstract:
This thesis concentrates on studying the operational disturbance behavior of machine tools integrated into FMS. Operational disturbances are short-term failures of machine tools which are especially disruptive to unattended or unmanned operation of FMS. The main objective was to examine the effect of operational disturbances on the reliability and operation time distribution of machine tools. The theoretical part of the thesis covers the fundamentals of FMS relating to the subject of this study. The concept of FMS, its benefits and the operator's role in FMS operation are reviewed, and the importance of reliability is presented. The terms describing the operation time of machine tools are formed by adopting standards and references. The concept of failure and the indicators describing reliability and operational performance of machine tools in FMSs are presented. The empirical part of the thesis describes the research methodology, which is a combination of automated data collection (ADC) and manual data collection. This methodology makes it possible to obtain a complete view of the operation time distribution of the studied machine tools. Data collection was carried out in four FMSs consisting of a total of 17 machine tools. Each FMS's basic features and the signals of ADC are described. The indicators describing the reliability and operation time distribution of machine tools were calculated from the collected data. The results showed that operational disturbances have a significant influence on machine tool reliability and operational performance. On average, an operational disturbance occurs every 8.6 hours of operation time and causes a downtime of 0.53 hours. Operational disturbances cause a 9.4% loss in operation time, which is twice the loss caused by technical failures (4.3%). Poor operational disturbance behavior thus decreases the utilization rate.
It was found that the features of the part family to be machined and the method technology related to it define the operational disturbance behavior of the machine tool. The main causes of operational disturbances were related to material quality variations, tool maintenance, NC program errors, ATC and machine tool control. The operator's role was emphasized: it was found that the failure-recording activity of the operators correlates with the utilization rate. The more precisely the operators record failures, the higher the utilization rate. FMS organizations which record failures more precisely also have fewer operational disturbances.
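The reliability indicators of this kind can be illustrated with a minimal sketch. The formulas below are the conventional MTBF/MTTR/availability definitions (assumed here; the thesis adopts its own standards-based terms), and the input numbers are purely illustrative, not the thesis data.

```python
def reliability_indicators(operation_time_h, failures, total_down_time_h):
    """Basic reliability indicators for a machine tool:
    mean time between failures (MTBF), mean time to repair (MTTR)
    and availability, in their conventional forms."""
    mtbf = operation_time_h / failures
    mttr = total_down_time_h / failures
    availability = mtbf / (mtbf + mttr)
    return mtbf, mttr, availability

# Illustrative only: 100 disturbances over 860 h of operation,
# 53 h of cumulative downtime.
mtbf, mttr, avail = reliability_indicators(860.0, 100, 53.0)
```

With these numbers a disturbance occurs on average every 8.6 hours and costs 0.53 hours of downtime, matching the scale of the figures reported above.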
Abstract:
In this dissertation, active galactic nuclei (AGN) are discussed as they are seen with the high-resolution radio-astronomical technique called Very Long Baseline Interferometry (VLBI). This observational technique provides very high angular resolution (about 10^-3 arcseconds, i.e. 1 milliarcsecond). VLBI observations performed at different radio frequencies (multi-frequency VLBI) allow one to penetrate deep into the core of an AGN to reveal the otherwise obscured inner part of the jet and the vicinity of the AGN's central engine. Multi-frequency VLBI data are used to scrutinize the structure and evolution of the jet, as well as the distribution of the polarized emission. These data can help to derive the properties of the plasma and the magnetic field, and to provide constraints on the jet composition and the parameters of the emission mechanisms. VLBI data can also be used to test possible physical processes in the jet by comparing observational results with the results of numerical simulations. The work presented in this thesis contributes to different aspects of AGN physics, as well as to the methodology of VLBI data reduction. In particular, Paper I reports evidence that the optical and radio emission of AGN comes from the same region in the inner jet. This result was obtained via simultaneous observations of the linear polarization of a sample of AGN in the optical and, using the VLBI technique, in the radio. Papers II and III describe in detail the jet kinematics of the blazar 0716+714, based on multi-frequency data, and reveal a peculiar kinematic pattern: plasma in the inner jet appears to move substantially faster than that in the large-scale jet. This peculiarity is explained in Paper III by jet bending. Paper III also presents a test of a new imaging technique for VLBI data, the Generalized Maximum Entropy Method (GMEM), on observed (not simulated) data, and compares its results with conventional imaging.
Papers IV and V report the results of observations of circularly polarized (CP) emission in AGN at small spatial scales. In particular, Paper IV presents values of the core CP for 41 AGN at 15, 22 and 43 GHz, obtained with the help of the standard gain transfer (GT) method, which was previously developed by D. Homan and J. Wardle for the calibration of multi-source VLBI observations. This method was developed for long multi-source observations, in which many AGN are observed in a single VLBI run. In contrast, in Paper V an attempt is made to apply the GT method to single-source VLBI observations. In such observations the object list includes only a few sources, a target source and two or three calibrators, and the run lasts much less time than a multi-source experiment. For the CP calibration of a single-source observation, it is necessary to have a source with zero or known CP as one of the calibrators. If archival observations included such a source in the list of calibrators, the GT method could also be used for the archival data, increasing the list of AGN with known CP at small spatial scales. Paper V also contains a calculation of the contributions of different sources of error to the uncertainty of the final result, and presents the first results for the blazar 0716+714.
Abstract:
An option is a financial contract that gives its holder the right (but not the obligation) to sell or buy something (for example a share) to or from the seller of the option at a certain price at a specified time in the future. The seller of the option commits to agreeing to this future transaction should the option holder later decide to exercise the option. The seller thus takes on the risk that the future transaction the option holder can force him to carry out turns out to be unfavorable for him. The question of how the seller can protect himself against this risk leads to interesting optimization problems, in which the goal is to find an optimal hedging strategy under certain given conditions. Such optimization problems have been studied extensively in financial mathematics. The thesis "The knapsack problem approach in solving partial hedging problems of options" introduces a further perspective into this discussion: in a relatively simple (finite and complete) market model, certain partial hedging problems can be formulated as so-called knapsack problems. The latter are well known in a branch of mathematics called operations research. The thesis shows how hedging problems previously solved by other means can alternatively be solved with methods developed for knapsack problems. The approach is also applied to entirely new hedging problems connected with so-called American options.
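The combinatorial core that the thesis maps hedging problems onto is the classical 0/1 knapsack problem; a minimal dynamic-programming sketch follows. The hedging interpretation in the comment is a loose analogy for orientation only, not the thesis's actual formulation.

```python
def knapsack(values, weights, capacity):
    """Classical 0/1 knapsack via dynamic programming:
    maximize total value subject to an integer weight budget."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Loose hedging analogy (illustrative only): "values" could score
# the scenarios a partial hedge covers, "capacity" the hedging budget.
result = knapsack([60, 100, 120], [10, 20, 30], 50)
```

The run above selects the second and third items for a total value of 220 within the weight budget of 50.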
Abstract:
The purpose of this thesis is twofold. The first and major part is devoted to sensitivity analysis of various discrete optimization problems, while the second part addresses methods for calculating measures of solution stability and for solving multicriteria discrete optimization problems. Despite the numerous approaches to stability analysis of discrete optimization problems, two major directions can be singled out: quantitative and qualitative. Qualitative sensitivity analysis is conducted for multicriteria discrete optimization problems with minisum, minimax and minimin partial criteria. The main results obtained here are necessary and sufficient conditions for different stability types of optimal solutions (or of a set of optimal solutions) of the considered problems. Within the quantitative direction, various measures of solution stability are investigated. A formula for a quantitative characteristic called the stability radius is obtained for the generalized equilibrium situation invariant to changes of game parameters in the case of the Hölder metric. The quality of a problem solution can also be described in terms of robustness analysis. In this work the concepts of accuracy and robustness tolerances are presented for a strategic game with a finite number of players in which the initial coefficients (costs) of the linear payoff functions are subject to perturbations. The investigation of the stability radius also aims to devise methods for its calculation. A new metaheuristic approach is derived for calculating the stability radius of an optimal solution to the shortest path problem. The main advantage of the developed method is that it is potentially applicable to calculating stability radii of NP-hard problems. The last chapter of the thesis focuses on deriving innovative methods, based on an interactive optimization approach, for solving multicriteria combinatorial optimization problems.
The key idea of the proposed approach is to utilize a parameterized achievement scalarizing function for solution calculation and to direct the interactive procedure by changing the weighting coefficients of this function. To illustrate the introduced ideas, a decision making process is simulated for a three-objective median location problem. The concepts, models and ideas collected and analyzed in this thesis create good and relevant grounds for developing more complicated and integrated models of postoptimal analysis and for solving the most computationally challenging problems related to it.
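An achievement scalarizing function of the common Wierzbicki type can be sketched as follows; the exact parameterization used in the thesis is not given in the abstract, so this is an assumed standard form, and the candidate objective vectors are hypothetical.

```python
def achievement(f, ref, weights, rho=1e-6):
    """Wierzbicki-type achievement scalarizing function for
    minimization: weighted Chebyshev deviation from a reference
    point plus a small augmentation term to avoid weak optima."""
    terms = [w * (fi - ri) for fi, ri, w in zip(f, ref, weights)]
    return max(terms) + rho * sum(terms)

# Hypothetical objective vectors for a three-objective median
# location problem; the decision maker steers the interactive
# search by changing the reference point and the weights.
candidates = [(4.0, 2.0, 3.0), (3.0, 3.0, 3.0), (5.0, 1.0, 2.0)]
ref = (2.0, 2.0, 2.0)
weights = (1.0, 1.0, 1.0)
best = min(candidates, key=lambda f: achievement(f, ref, weights))
```

Re-running the selection with different weights is what drives the interactive procedure toward different Pareto-optimal solutions.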
Abstract:
This thesis examines the impact of foreign exchange rate volatility on the extent of use of foreign currency derivatives, with a particular focus on the effects of the 2008 global financial crisis, which greatly increased the risk level in the capital markets. The change in currency derivatives use is analyzed by comparing means between different periods and, in addition, by linear regression, which makes it possible to assess the explanatory power of the model. The research data consist of financial statement figures for fiscal years 2006-2011 published by firms operating in traditional Finnish industrial sectors. The volatilities of the three chosen currency pairs are calculated from the daily fixing rates of the ECB, and based on the volatility the sample period is divided into three sub-periods. The results suggest that increased FX market volatility did not increase the use of foreign currency derivatives. Furthermore, the increased foreign exchange rate volatility did not increase the power of the linear regression model to explain the use of foreign currency derivatives compared to previous studies.
Abstract:
In the present work, the bifurcation behaviour of the solutions of the Rayleigh equation and of the corresponding spatially distributed system is analysed. The conditions for oscillatory and monotonic loss of stability are obtained. In the case of oscillatory loss of stability, the linear spectral problem is analysed. For the nonlinear problem, recurrent formulas for the general term of the asymptotic approximation of the self-oscillations are found, and the stability of the periodic mode is analysed. The Lyapunov-Schmidt method is used for the asymptotic approximation. The correspondence between periodic solutions of the ODE and the PDE is investigated, as is the influence of diffusion on the frequency of the self-oscillations. Several numerical experiments are performed to support the theoretical findings.
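A numerical experiment of the kind mentioned can be sketched as follows. The Rayleigh equation is taken here in its standard form x'' - mu*(1 - x'^2)*x' + x = 0 (an assumption, since the abstract gives no formula), integrated with a classical fourth-order Runge-Kutta scheme; for mu > 0 a small perturbation of the origin grows onto a stable limit cycle, i.e. a self-oscillation.

```python
import numpy as np

MU = 1.0  # bifurcation parameter (mu > 0: self-oscillations)

def rhs(state):
    """Rayleigh equation x'' - mu*(1 - x'^2)*x' + x = 0
    written as a first-order system in (x, v) with v = x'."""
    x, v = state
    return np.array([v, MU * (1.0 - v * v) * v - x])

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([0.1, 0.0])   # small perturbation of the equilibrium
dt, steps = 0.01, 20000
traj = []
for _ in range(steps):
    state = rk4_step(state, dt)
    traj.append(state[0])
# The amplitude of x grows from 0.1 and settles onto a limit cycle.
```

Plotting the tail of `traj` (or the (x, v) phase portrait) exhibits the periodic mode whose stability the analytical part addresses.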
Abstract:
This thesis focuses on the molecular mechanisms regulating the photosynthetic electron transfer reactions upon changes in light intensity. To investigate these mechanisms, I used mutants of the model plant Arabidopsis thaliana impaired in various aspects of the regulation of the photosynthetic light reactions. These included mutants of photosystem II (PSII) and light harvesting complex II (LHCII) phosphorylation (stn7 and stn8), mutants of energy-dependent non-photochemical quenching (NPQ) (npq1 and npq4), and a mutant of the regulation of photosynthetic electron transfer (pgr5). All of these processes have been extensively investigated during the past decades, mainly in plants grown under steady-state conditions, and therefore many aspects of the acclimation processes may have been neglected. In this study, plants were grown under fluctuating light, i.e. alternating low and high light intensities, in order to challenge the photosynthetic regulatory mechanisms maximally. In the pgr5 and stn7 mutants, growth under fluctuating light mainly damaged PSI, while PSII was largely unaffected. It is shown that the PGR5 protein regulates linear electron transfer: it is essential for the induction of the transthylakoid ΔpH that, in turn, activates energy-dependent NPQ and downregulates the activity of cytochrome b6f. This regulation was shown to be essential for the photoprotection of PSI under fluctuations in light intensity. The stn7 mutants were able to acclimate under constant growth light by modulating the PSII/PSI ratio, while under fluctuating growth light they failed to implement this acclimation strategy. LHCII phosphorylation balances the distribution of excitation energy between PSII and PSI by increasing the probability that excitons are trapped by PSI. LHCII can be phosphorylated over the whole thylakoid membrane (grana cores as well as stroma lamellae), and when phosphorylated it constitutes a common antenna for PSII and PSI.
Moreover, LHCII was shown to work as a functional bridge that allows energy transfer between PSII units in grana cores and between PSII and PSI centers in grana margins. Consequently, PSI can function as a quencher of excitation energy. Ultimately, LHCII phosphorylation, NPQ and the photosynthetic control of linear electron transfer via cytochrome b6f work in concert to maintain the redox poise of the electron transfer chain. This is a prerequisite for successful plant growth under changing natural light conditions, both in the short and the long term.