885 results for Nonlinear operators
Abstract:
This paper applies the Multi-Harmonic Nonlinear Receptance Coupling Approach (MUHANORCA) (Ferreira 1998) to evaluate the frequency response characteristics of a beam that is clamped at one end and supported at the other end by a nonlinear cubic stiffness joint. In order to apply the substructure coupling technique, the problem was characterised by coupling a clamped linear beam with a nonlinear cubic stiffness joint. The experimental results were obtained by sinusoidal excitation with a special force control algorithm in which the level of the fundamental force is kept constant and the level of the harmonics is kept at zero at all measured frequencies.
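As a rough illustration of the coupling idea, the sketch below keeps only the fundamental harmonic: the cubic joint is replaced by its amplitude-dependent describing-function stiffness and coupled to a single-mode receptance of the linear beam. All numerical values are hypothetical, and the full MUHANORCA formulation retains several harmonics, so this is a minimal single-harmonic sketch rather than the method of the paper.

# Single-harmonic sketch of coupling a linear substructure with a cubic
# stiffness joint. Modal mass, damping, k3 and the force level are hypothetical.
import numpy as np

m, c, k = 1.0, 0.5, 1.0e4     # hypothetical single-mode beam parameters
k3 = 5.0e8                    # hypothetical cubic joint stiffness [N/m^3]
F = 1.0                       # constant fundamental force amplitude [N]

def beam_receptance(w):
    """Receptance of the linear clamped beam, reduced to one mode."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

def coupled_amplitude(w, iters=200, tol=1e-10):
    """Fixed-point iteration on the response amplitude at one frequency.

    The cubic joint is replaced by its describing-function (equivalent)
    stiffness k_eq(A) = 3/4 * k3 * A^2, and the grounded joint is coupled
    to the beam through H_c = H / (1 + k_eq * H).
    """
    H = beam_receptance(w)
    A = abs(H * F)                       # start from the linear response
    for _ in range(iters):
        k_eq = 0.75 * k3 * A**2
        Hc = H / (1.0 + k_eq * H)
        A_new = abs(Hc * F)
        if abs(A_new - A) < tol:
            break
        A = 0.5 * (A + A_new)            # relaxed update for robustness near resonance
    return A

freqs = np.linspace(50.0, 150.0, 400)    # rad/s sweep around the linear resonance
response = [coupled_amplitude(w) for w in freqs]
print("peak response amplitude:", max(response))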
Abstract:
This paper analyzes the local dynamical behavior of a slewing flexible structure considering nonlinear curvature. The dynamics of the original (nonlinear) governing equations of motion are reduced to the center manifold in the neighborhood of an equilibrium solution in order to study the local stability of the system. At this critical point, a Hopf bifurcation occurs. In this region, one can find values of the control parameter (structural damping coefficient) for which the system is unstable and values for which the stability of the system is assured (periodic motion). This local analysis of the system reduced to the center manifold establishes the stable/unstable behavior of the original system around a known solution.
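As a minimal illustration of how such a Hopf point is detected, the sketch below sweeps a damping-like control parameter and watches a complex eigenvalue pair of a generic linearized system cross the imaginary axis. The 2x2 Jacobian is a toy stand-in, not the slewing-structure model of the paper.

# Locating a Hopf bifurcation by sweeping a damping-like parameter mu and
# tracking the real part of the leading eigenvalue of the linearization.
import numpy as np

def jacobian(mu, omega=1.0):
    # Linearization x' = J x near the equilibrium; eigenvalues are
    # mu/2 +/- i*sqrt(omega^2 - mu^2/4) for small |mu|.
    return np.array([[0.0, 1.0],
                     [-omega**2, mu]])

mus = np.linspace(-0.2, 0.2, 401)
real_parts = [np.max(np.linalg.eigvals(jacobian(mu)).real) for mu in mus]

# The Hopf point is where the leading real part changes sign: the equilibrium
# is stable for mu < 0 and unstable for mu > 0 in this toy model.
crossing = mus[np.argmin(np.abs(real_parts))]
print(f"estimated Hopf point near mu = {crossing:.3f}")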
Abstract:
Logistics infrastructure and transportation services have for decades been the responsibility of countries and governments, or have been under strict regulation policies. Among the first branches opened to competition in the EU, as well as on other continents, were air transport (passenger and freight operators) and road transport. This has resulted in lower costs, better connectivity and, in most cases, higher service quality. However, a rather large share of other logistics-related activities is still directly (or indirectly) under governmental influence, e.g. railway infrastructure, road infrastructure, railway operations, airports and sea ports. Due to globalization, governmental influence is no longer as necessary in this sector, since transportation needs have grown at a much faster pace than the economy. Freight transportation needs also do not correlate with the passenger side, because only a small number of areas in the world have specialized in the production of particular goods. Therefore, in a number of cases public-private partnerships, or even privately owned companies operating in these sub-branches, have been identified as beneficial for countries, customers and further economic growth. The objective of this research work is to shed more light on these kinds of experiments, especially in the relatively unknown sub-branches of logistics such as railways, airports and sea container transport. In this research work we selected companies that are publicly listed on a stock exchange and have the financial scale to be considered serious companies rather than start-up-phase ventures. Our research results show that railways and airports usually require high fixed investments, but have shown generally good financial performance over the last five years, both in terms of profitability and cash flow. Contrary to the common belief of prosperity in globally growing container transport, sea container vessel operators have not shown equally impressive financial performance. Margins in this business are generally thin, and profitability has been sacrificed for high growth; this also concerns cash flow performance, which has been lower as well. However, when we examine these three logistics sub-branches from the angle of shareholder value development over the period 2002-2007, we were surprised to find that all three outperformed general stock market indexes during this period. Even more surprising is the result that the financially somewhat weaker sea container transportation sector shows the highest shareholder value gain over the examination period. It should be remembered, however, that the analysis provides only a limited picture, since e.g. dividends were not taken into consideration in this research work. Therefore, e.g. US railway operators are at a disadvantage to others in the analysis, since they have been able to pay dividends to shareholders over a long period of time. Based on this research work we argue that investment in the transportation/logistics sector seems to be a safe alternative, yielding high gains at relatively low risk. Even if the global economy were to face a period of slower growth, this sector seems to provide opportunities in more demanding situations as well.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model, where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications, where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. Asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
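To make the ridge notion concrete, the sketch below projects a point onto a one-dimensional ridge of a Gaussian kernel density estimate. For brevity it uses subspace-constrained mean shift rather than the trust region Newton method developed in the thesis, and the bandwidth and toy data are hypothetical.

# Projecting a point onto a density ridge: move only along the minor
# eigendirections of the KDE Hessian until the off-ridge gradient vanishes.
import numpy as np

def kde_grad_hess(x, data, h):
    """Kernel weights, gradient and Hessian of a Gaussian KDE at x
    (up to a common positive constant)."""
    d = x - data                                  # (n, dim) differences
    w = np.exp(-0.5 * np.sum(d**2, axis=1) / h**2)
    grad = -(w[:, None] * d).sum(axis=0) / h**2
    hess = (np.einsum('i,ij,ik->jk', w, d, d) / h**4
            - w.sum() * np.eye(x.size) / h**2)
    return w, grad, hess

def project_to_ridge(x, data, h=0.3, iters=500, tol=1e-7):
    """Subspace-constrained mean shift toward a 1-D ridge of the KDE."""
    for _ in range(iters):
        w, g, H = kde_grad_hess(x, data, h)
        _, vecs = np.linalg.eigh(H)               # eigenvalues in ascending order
        V = vecs[:, :-1]                          # the d-1 minor eigendirections
        shift = h**2 * g / w.sum()                # mean-shift vector from the gradient
        step = V @ (V.T @ shift)                  # keep only the off-ridge component
        if np.linalg.norm(step) < tol:
            break
        x = x + step
    return x

# Toy usage: noisy points along a circle; a nearby point is pulled onto the
# ridge of the estimated density (approximately the circle itself).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
data = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(500, 2))
print(project_to_ridge(np.array([0.8, 0.1]), data))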
Abstract:
This master's thesis introduces the fuzzy tolerance/equivalence relation and its application in cluster analysis. The work presents the construction of fuzzy equivalence relations using increasing generators. Here, we investigate the role of increasing generators in the creation of intersection, union and complement operators. The objective is to develop different varieties of fuzzy tolerance/equivalence relations using different varieties of increasing generators. Finally, we perform a comparative study of these developed varieties of fuzzy tolerance/equivalence relations in their application to a clustering method.
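A minimal sketch of the pipeline the abstract describes might look as follows: a fuzzy tolerance relation is built from pairwise distances and closed transitively under a t-norm associated with a Yager-type generator, giving a fuzzy equivalence relation whose alpha-cuts yield crisp clusters. The specific generator, parameter p and toy data are illustrative choices, not those of the thesis.

# Fuzzy tolerance relation -> max-T transitive closure -> fuzzy equivalence.
import numpy as np

def yager_t_norm(a, b, p=2.0):
    """Yager t-norm, the dual of the t-conorm generated by g(x) = x**p."""
    return 1.0 - np.minimum(1.0, ((1.0 - a)**p + (1.0 - b)**p) ** (1.0 / p))

def tolerance_relation(X):
    """Fuzzy tolerance relation R(i, j) = 1 - normalized pairwise distance."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return 1.0 - D / D.max()

def transitive_closure(R, t_norm=yager_t_norm, max_iter=100):
    """Max-T transitive closure, turning a tolerance into an equivalence
    relation with respect to the chosen t-norm."""
    for _ in range(max_iter):
        comp = np.max(t_norm(R[:, :, None], R[None, :, :]), axis=1)  # R o R
        R_new = np.maximum(R, comp)
        if np.allclose(R_new, R):
            return R_new
        R = R_new
    return R

# Toy usage: alpha-cuts of the closed relation give a hierarchy of crisp clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
E = transitive_closure(tolerance_relation(X))
print((E >= 0.9).astype(int))   # crisp clusters at the 0.9 alpha-cut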
Abstract:
The objective of this thesis is to develop and further generalize the differential evolution based data classification method. For many years, evolutionary algorithms have been successfully applied to many classification tasks. Evolutionary algorithms are population-based, stochastic search algorithms that mimic natural selection and genetics. Differential evolution is an evolutionary algorithm that has gained popularity because of its simplicity and good observed performance. In this thesis a differential evolution classifier with a pool of distances is proposed, demonstrated and initially evaluated. The differential evolution classifier is a nearest prototype vector based classifier that applies a global optimization algorithm, differential evolution, to determine the optimal values for all free parameters of the classifier model during the training phase of the classifier. The differential evolution classifier, which applies an individually optimized distance measure for each new data set to be classified, is here generalized to cover a pool of distances. Instead of optimizing a single distance measure for the given data set, the selection of the optimal distance measure from a predefined pool of alternative measures is attempted systematically and automatically. Furthermore, instead of only selecting the optimal distance measure from a set of alternatives, an attempt is made to optimize the values of the possible control parameters related to the selected distance measure. Specifically, a pool of alternative distance measures is first created and then the differential evolution algorithm is applied to select the optimal distance measure that yields the highest classification accuracy for the current data. After determining the optimal distance measures for the given data set together with their optimal parameters, all determined distance measures are aggregated to form a single total distance measure. The total distance measure is applied to the final classification decisions. The actual classification process is still based on the nearest prototype vector principle: a sample belongs to the class represented by the nearest prototype vector when measured with the optimized total distance measure. During the training process the differential evolution algorithm determines the optimal class vectors, selects the optimal distance metrics, and determines the optimal values for the free parameters of each selected distance measure. The results obtained with the above method confirm that the choice of distance measure is one of the most crucial factors for obtaining higher classification accuracy. The results also demonstrate that it is possible to build a classifier that is able to select the optimal distance measure for the given data set automatically and systematically. After the optimal distance measures and their parameters have been found, the resulting distances are aggregated to form a total distance, which is used to measure the deviation between the class vectors and the samples and thus to classify the samples. This thesis also discusses two types of aggregation operators, namely ordered weighted averaging (OWA) based multi-distances and generalized ordered weighted averaging (GOWA). These aggregation operators were applied in this work to the aggregation of the normalized distance values. The results demonstrate that a proper combination of aggregation operator and weight generation scheme plays an important role in obtaining good classification accuracy.
The main outcomes of the work are six new generalized versions of the earlier differential evolution classifier. All of these DE classifiers demonstrated good results in the classification tasks.
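A stripped-down sketch of the core idea is given below: a nearest-prototype classifier whose prototype vectors and a single distance parameter are optimized with SciPy's differential evolution. The pool-of-distances selection and the OWA/GOWA aggregation described above are omitted for brevity, and the data set and parameter ranges are illustrative.

# Nearest-prototype classifier trained by differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = (X - X.min(0)) / (X.max(0) - X.min(0))        # scale features to [0, 1]
n_classes, n_feat = len(np.unique(y)), X.shape[1]

def decode(theta):
    """Split the DE parameter vector into prototypes and a distance exponent."""
    protos = theta[:n_classes * n_feat].reshape(n_classes, n_feat)
    p = 1.0 + 4.0 * theta[-1]                     # Minkowski order in [1, 5]
    return protos, p

def predict(theta, X):
    protos, p = decode(theta)
    d = np.sum(np.abs(X[:, None, :] - protos[None, :, :]) ** p, axis=2)
    return np.argmin(d, axis=1)                   # index of the nearest prototype

def objective(theta):
    """DE minimizes, so return the training error rate."""
    return np.mean(predict(theta, X) != y)

bounds = [(0.0, 1.0)] * (n_classes * n_feat + 1)
result = differential_evolution(objective, bounds, maxiter=100, seed=0,
                                polish=False)
print("training accuracy:", 1.0 - result.fun)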
Abstract:
The two main objectives of Bayesian inference are to estimate parameters and states. In this thesis, we are interested in how this can be done in the framework of state-space models when there is a complete or partial lack of knowledge of the initial state of a continuous nonlinear dynamical system. In the literature, similar problems have been referred to as diffuse initialization problems. The first objective is achieved by extending the previously developed diffuse initialization Kalman filtering techniques for discrete systems to continuous systems. The second objective is to estimate parameters using MCMC methods with a likelihood function obtained from the diffuse filtering. These methods are applied to data collected from the 1995 Ebola outbreak in Kikwit, DRC, in order to estimate the parameters of the system.
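For intuition, the sketch below shows a discrete-time Kalman filter in which the unknown initial state is handled with the common large-variance approximation of a diffuse prior, returning the log-likelihood an MCMC sampler could use for parameter estimation. The thesis develops exact diffuse filtering for continuous systems; the model matrices here are generic placeholders.

# Kalman filter log-likelihood with an (approximately) diffuse initial state.
import numpy as np

def diffuse_kalman_loglik(y, A, H, Q, R, kappa=1e7):
    """Log-likelihood of observations y (T x p) under x_{k+1} = A x_k + w,
    y_k = H x_k + v, with the diffuse prior approximated by P0 = kappa * I."""
    n = A.shape[0]
    x = np.zeros(n)
    P = kappa * np.eye(n)                 # "big kappa" diffuse prior
    loglik = 0.0
    for yk in y:
        # prediction
        x = A @ x
        P = A @ P @ A.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        innov = yk - H @ x
        x = x + K @ innov
        P = (np.eye(n) - K @ H) @ P
        sign, logdet = np.linalg.slogdet(S)
        loglik += -0.5 * (logdet + innov @ np.linalg.solve(S, innov)
                          + len(yk) * np.log(2 * np.pi))
    return loglik

# Toy usage with a random-walk-plus-noise model (hypothetical parameters).
rng = np.random.default_rng(1)
T, A, H = 200, np.eye(1), np.eye(1)
Q, R = 0.1 * np.eye(1), 1.0 * np.eye(1)
states = np.cumsum(rng.normal(scale=np.sqrt(0.1), size=(T, 1)), axis=0)
obs = states + rng.normal(scale=1.0, size=(T, 1))
print(diffuse_kalman_loglik(obs, A, H, Q, R))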
Abstract:
The objectives of this study were to evaluate and compare the use of linear and nonlinear methods for analysis of heart rate variability (HRV) in healthy subjects and in patients after acute myocardial infarction (AMI). Heart rate (HR) was recorded for 15 min in the supine position in 10 patients with AMI taking β-blockers (aged 57 ± 9 years) and in 11 healthy subjects (aged 53 ± 4 years). HRV was analyzed in the time domain (RMSSD and RMSM) and in the frequency domain using low- and high-frequency bands in normalized units (nu; LFnu and HFnu) and the LF/HF ratio, and approximate entropy (ApEn) was determined. There was a correlation (P < 0.05) of RMSSD, RMSM, LFnu, HFnu, and the LF/HF ratio index with the ApEn of the AMI group on the 2nd (r = 0.87, 0.65, 0.72, 0.72, and 0.64) and 7th day (r = 0.88, 0.70, 0.69, 0.69, and 0.87) and of the healthy group (r = 0.63, 0.71, 0.63, 0.63, and 0.74), respectively. The median HRV indexes of the AMI group on the 2nd and 7th day differed from the healthy group (P < 0.05): RMSSD = 10.37, 19.95, 24.81; RMSM = 23.47, 31.96, 43.79; LFnu = 0.79, 0.79, 0.62; HFnu = 0.20, 0.20, 0.37; LF/HF ratio = 3.87, 3.94, 1.65; ApEn = 1.01, 1.24, 1.31, respectively. There was agreement between the methods, suggesting that they have the same power to evaluate autonomic modulation of HR in both AMI patients and healthy subjects. AMI contributed to a reduction in cardiac signal irregularity, higher sympathetic modulation and lower vagal modulation.
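As a concrete illustration of two of the compared indexes, the sketch below computes time-domain RMSSD and approximate entropy (ApEn) from a series of RR intervals. The ApEn parameters (m = 2, r = 0.2 SD) are common defaults rather than necessarily those used by the authors, and the RR series is synthetic.

# RMSSD and approximate entropy from RR intervals (in milliseconds).
import numpy as np

def rmssd(rr):
    """Root mean square of successive RR-interval differences."""
    return np.sqrt(np.mean(np.diff(rr) ** 2))

def apen(rr, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) with r = r_factor * SD of the series."""
    rr = np.asarray(rr, dtype=float)
    r = r_factor * rr.std()
    def phi(m):
        n = len(rr) - m + 1
        templates = np.array([rr[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.mean(dist <= r, axis=1)       # match fractions (self-matches included)
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

# Toy usage with a synthetic RR series (hypothetical values).
rng = np.random.default_rng(2)
rr = 800 + 50 * np.sin(np.linspace(0, 20, 900)) + rng.normal(0, 20, 900)
print(f"RMSSD = {rmssd(rr):.1f} ms, ApEn = {apen(rr):.2f}")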
Travel intermediaries going online - an analysis of the driving forces: case Finnish tour operators
Abstract:
The Finnish securities markets are being harmonized to enable better, more reliable and more timely settlement of securities. Omnibus accounts are a common practice in the European securities markets, but Finland forbids its domestic investors from using them. There is a possibility that omnibus account usage will be allowed for Finnish investors in the future. This study aims to give Finnish investors and account operators a comprehensive picture of the costs and benefits that the omnibus account structure would have for them. The study uses qualitative research methods. A literature review provides the framework for the study. Different kinds of research articles, regulatory documents, studies performed by European organisations, and Finnish news reports are used to analyse the costs and benefits of omnibus accounts. The viewpoint is strictly that of account operators and investors, and the different effects on them are considered. The results of the analysis show that there are a number of costs and benefits that investors and account operators must take into consideration regarding omnibus accounts. The costs are related to the development of IT systems so that participants are able to adapt to the new structure and operate according to its requirements. A decrease in the transparency of holdings is a disadvantage of the structure and needs to be assessed carefully to avoid the problems it might bring. The benefits are mostly related to increased competition in the securities markets as well as to possible cost reductions in securities settlement. The costs and benefits were analysed according to the study plan of this thesis and, as a result, the significance and impact of omnibus accounts for Finnish investors and account operators depend on the competition level and on the decisions that all market participants make when determining whether the account structure is beneficial for their operations.
Abstract:
This paper derives optimal monetary policy rules in setups where certainty equivalence does not hold because central bank preferences are not quadratic and/or the aggregate supply relation is nonlinear. Analytical results show that these features lead to sign and size asymmetries, and to nonlinearities, in the policy rule. Reduced-form estimates indicate that US monetary policy can be characterized by a nonlinear policy rule after 1983, but not before 1979. This finding is consistent with the view that the Fed's inflation preferences during the Volcker-Greenspan regime differed considerably from those during the Burns-Miller regime.
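For illustration, one stylized reduced-form rule that captures such sign and size asymmetries (not necessarily the exact specification derived or estimated in the paper) is
\[
  i_t = \rho\, i_{t-1} + (1-\rho)\bigl[c + \alpha\,(\pi_t - \pi^*) + \beta\, y_t
        + \gamma\,(\pi_t - \pi^*)^2 + \delta\, y_t^2\bigr] + \varepsilon_t ,
\]
where $i_t$ is the policy rate, $\pi_t - \pi^*$ the inflation gap and $y_t$ the output gap. Under quadratic preferences and a linear aggregate supply curve (certainty equivalence) $\gamma = \delta = 0$, so significant estimates of $\gamma$ or $\delta$ signal the asymmetries described above.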