804 results for multi-dimensional maps


Relevance:

90.00%

Publisher:

Abstract:

This thesis is a study of discrete nonlinear systems represented by one-dimensional mappings. As one-dimensional iterative maps represent Poincaré sections of higher-dimensional flows, they offer a convenient means to understand the dynamical evolution of many physical systems. The thesis highlights the basic ideas of deterministic chaos. Qualitative and quantitative measures for the detection and characterization of chaos in nonlinear systems are discussed, and some simple mathematical models exhibiting chaos are presented. The bifurcation scenario and the possible routes to chaos are explained, and the results of the numerical computation of the Lyapunov exponents (λ) of one-dimensional maps are presented. The thesis focuses on the results of our investigations on combinations of maps, the scaling behaviour of the Lyapunov characteristic exponents of one-dimensional maps, and the nature of bifurcations in a discontinuous logistic map. It reviews the major routes to chaos in dissipative systems, namely period doubling, intermittency, and crises. This study gives a theoretical understanding of the route to chaos in discontinuous systems. A detailed analysis of the dynamics of a discontinuous logistic map is carried out, both analytically and numerically, to understand the route it follows to chaos. The present analysis deals only with the case of the discontinuity parameter applied to the right half of the interval of mapping. A detailed analysis of the n-furcations of various periodicities can be made, and a more general theory for the map with discontinuities applied at different positions can be developed on a similar footing.
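As a generic illustration of how such a numerical computation of λ proceeds (a minimal sketch for the plain logistic map, not the combination or discontinuous maps studied in the thesis), the exponent can be estimated as the orbit average of log|f'(x)|:

```python
import math

def lyapunov_logistic(r, x0=0.1, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of log|f'(x)| = log|r*(1 - 2x)|."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n_iter
```

At r = 4 the exponent is known analytically to be ln 2 ≈ 0.693, while in a periodic window such as r = 3.2 the estimate comes out negative, signalling a stable cycle rather than chaos.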

Relevance:

90.00%

Publisher:

Abstract:

The study of simple chaotic maps for non-equilibrium processes in statistical physics has been one of the central themes in the theory of chaotic dynamical systems. Recently, many works have been carried out on deterministic diffusion in spatially extended one-dimensional maps. This can be related to real physical systems such as Josephson junctions in the presence of microwave radiation and parametrically driven oscillators. Transport due to chaos is an important problem in Hamiltonian dynamics as well. A recent approach is to evaluate the exact diffusion coefficient in terms of the periodic orbits of the system in the form of cycle expansions. Chaotic motion in such spatially extended maps, however, has two complementary aspects: diffusion and intermittency. These are related to the time evolution of the probability density function, which is approximately Gaussian by the central limit theorem. The characteristic function method introduced by Fujisaka and his co-workers is a very powerful tool for analysing both these aspects of chaotic motion. The theory based on the characteristic function actually provides a thermodynamic formalism for chaotic systems. It can also be applied to other types of chaos-induced diffusion, such as the one arising in the statistics of trajectory separation. We noted that there is a close connection between the cycle expansion technique and the characteristic function method, and found that this connection can be exploited to enhance the applicability of the cycle expansion technique. In this way, cycle expansions can be used to analyse the probability density function in chaotic maps. In our research we have successfully applied the characteristic function method and the cycle expansion technique to the analysis of some chaotic maps. In this connection, we introduced two classes of chaotic maps with variable shape by generalizing two types of maps well known in the literature.
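The diffusive aspect can be conveyed by a toy computation (a hedged sketch: a unit-step walk whose left/right choice is driven by the chaotic logistic map, not the specific spatially extended maps or the cycle-expansion machinery discussed above). The diffusion coefficient is estimated from the ensemble mean squared displacement:

```python
def diffusion_coefficient(n_steps=400, n_walkers=500):
    """Estimate D = <x_n^2> / (2n) for an ensemble of unit-step walks whose
    direction each step is decided by the chaotic logistic map at r = 4."""
    msd = 0.0
    for w in range(n_walkers):
        # spread of initial internal states in (0.3, 0.7), avoiding x = 0.5
        x = 0.3 + 0.4 * (w + 0.5) / n_walkers
        pos = 0
        for _ in range(n_steps):
            pos += 1 if x < 0.5 else -1   # displacement from the chaotic state
            x = 4.0 * x * (1.0 - x)
        msd += pos * pos
    msd /= n_walkers
    return msd / (2.0 * n_steps)
```

For this toy model the estimate is expected to lie near the uncorrelated-walk value D = 1/2, since successive symbols of the r = 4 logistic map are pairwise uncorrelated.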

Relevance:

90.00%

Publisher:

Abstract:

The Self-Organizing Map (SOM) is a popular unsupervised neural network able to provide effective clustering and data visualization for multidimensional input datasets. In this paper, we present an application of the simulated annealing procedure to the SOM learning algorithm with the aim of obtaining fast learning and better performance in terms of quantization error. The proposed learning algorithm, called the Fast Learning Self-Organized Map, does not compromise the simplicity of the basic learning algorithm of the standard SOM. It also improves the quality of the resulting maps by providing better clustering quality and topology preservation of the input multi-dimensional data. Several experiments are used to compare the proposed approach with the original algorithm and some of its modifications and speed-up techniques.
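For reference, the standard SOM learning loop that the paper builds on can be sketched as follows (a minimal, hypothetical implementation with exponentially decaying learning rate and neighborhood, not the proposed simulated-annealing variant). The quantization error is the mean distance from each input to its best-matching unit:

```python
import math
import random

def train_som(data, n_units=4, n_epochs=40, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 1-D SOM on 2-D inputs with exponentially decaying learning
    rate and Gaussian neighborhood; return the unit weight vectors."""
    rng = random.Random(seed)
    weights = [[rng.random(), rng.random()] for _ in range(n_units)]
    t, t_max = 0, n_epochs * len(data)
    for _ in range(n_epochs):
        for x in data:
            # best-matching unit: the weight vector closest to the input
            bmu = min(range(n_units),
                      key=lambda i: (weights[i][0] - x[0]) ** 2
                                    + (weights[i][1] - x[1]) ** 2)
            lr = lr0 * math.exp(-3.0 * t / t_max)                 # decaying rate
            sigma = max(sigma0 * math.exp(-3.0 * t / t_max), 0.1)  # shrinking radius
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[i][0] += lr * h * (x[0] - weights[i][0])
                weights[i][1] += lr * h * (x[1] - weights[i][1])
            t += 1
    return weights

def quantization_error(data, weights):
    """Mean Euclidean distance from each input to its best-matching unit."""
    return sum(min(math.hypot(w[0] - x[0], w[1] - x[1]) for w in weights)
               for x in data) / len(data)
```

On clustered data the trained units settle near the clusters, so the quantization error ends up far below the inter-cluster distance.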

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this paper is to investigate several analytical methods of solving the first-passage (FP) problem for the Rouse model, the simplest model of a polymer chain. We show that this problem has to be treated as a multi-dimensional Kramers problem, which presents rich and unexpected behavior. We first perform direct and forward-flux sampling (FFS) simulations, and measure the mean first-passage time $\tau(z)$ for the free end to reach a certain distance $z$ away from the origin. The results show that the mean FP time decreases as the Rouse chain is represented by more beads. Two scaling regimes of $\tau(z)$ are observed, with the transition between them varying as a function of chain length. We use these simulation results to test two theoretical approaches. One is a well-known asymptotic theory valid in the limit of zero temperature. We show that this limit corresponds to a fully extended chain with every chain segment stretched, which is not particularly realistic. A new theory based on the well-known Freidlin-Wentzell theory is proposed, in which the dynamics is projected onto the minimal action path. The new theory predicts both scaling regimes correctly, but fails to obtain the correct numerical prefactor in the first regime. Combining our theory with the FFS simulations leads us to a simple analytical expression valid for all extensions and chain lengths. One application of the polymer FP problem occurs in the context of branched polymer rheology. In this paper, we consider the arm-retraction mechanism in the tube model, which maps exactly onto the model we have solved. The results are compared to the Milner-McLeish theory without constraint release, which is found to overestimate the FP time by a factor of 10 or more.
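The simplest version of such a first-passage computation, reduced to a single overdamped bead rather than the full Rouse chain (a hedged sketch, not the paper's FFS machinery), can be simulated directly and compared with the exact mean exit time a^2/(2D) for symmetric exit from the interval (-a, a):

```python
import math
import random

def mean_exit_time(a=1.0, D=1.0, dt=1e-3, n_trials=2000, seed=42):
    """Euler simulation of the mean first-passage time for a free Brownian
    particle started at 0 to leave (-a, a); the exact answer is a**2 / (2*D)."""
    rng = random.Random(seed)
    step = math.sqrt(2.0 * D * dt)   # standard deviation of one Euler increment
    total = 0.0
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < a:
            x += step * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_trials
```

With a = D = 1 the exact value is 0.5; the Euler scheme overestimates it slightly because discrete steps miss some barrier crossings, an effect that shrinks with dt.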

Relevance:

90.00%

Publisher:

Abstract:

We study the growth of $Df^n(f(c))$ when f is a Fibonacci critical covering map of the circle with negative Schwarzian derivative, degree d >= 2 and critical point c of order l > 1. As an application we prove that f exhibits exponential decay of geometry if and only if l <= 2, and in this case it has an absolutely continuous invariant probability measure, although not satisfying the so-called Collet-Eckmann condition. (C) 2009 Elsevier Masson SAS. All rights reserved.

Relevance:

90.00%

Publisher:

Abstract:

The ICTM (Interval Categorizer Tessellation Model), the subject of this thesis, is a general model for the analysis of spaces of a geometric nature, based on tessellations, which is capable of producing a reliable categorization of the set of points of a given space according to multiple characteristics of the points, each characteristic corresponding to a layer of the model. For example, in the analysis of geographic terrains, a geographic region can be analyzed according to its topography, vegetation, demography, economic data, etc., each generating a different subdivision of the region. The general tessellation-based model is not restricted, however, to the analysis of two-dimensional spaces. The set of analyzed points may belong to a multidimensional space, determining the multi-dimensional character of each layer. A procedure that projects the categorizations obtained in each layer onto a base layer leads to a more significant reliable categorization, which combines into a single classification the analyses obtained for each characteristic. This allows many interesting analyses concerning the mutual dependence of the characteristics. The dimension of the tessellation can be arbitrary or chosen according to some specific criterion established by the application. In that case, the categorization obtained can be refined, either by redefining the dimension of the tessellation or by taking each resulting sub-region to be analyzed separately. The information in the registers can be easily recovered simply by indexing the matrix elements at any moment of the execution. The implementation of the model is naturally parallel, since the analysis is performed basically by local rules. As numerical input data are usually susceptible to errors, the model uses interval arithmetic for automatic error control.
The ICTM model also supports the extraction of facts about the regions, qualitatively, through logical sentences, or quantitatively, through probability analysis. This work is financially supported by CNPq/CTPETRO and FAPERGS.
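The flavor of such local, interval-based categorization rules can be conveyed by a minimal one-dimensional sketch (hypothetical categories, not the actual ICTM layers or tessellation machinery): each cell value carries an error bound, and a step between neighboring cells is classified only when the intervals make the conclusion safe:

```python
def categorize_profile(values, err):
    """Classify each step between neighboring cells of a 1-D profile as
    'up', 'down', or 'uncertain', comparing the intervals [v - err, v + err]
    so that measurement error is handled automatically (interval arithmetic)."""
    cats = []
    for a, b in zip(values, values[1:]):
        lo_a, hi_a = a - err, a + err
        lo_b, hi_b = b - err, b + err
        if lo_b > hi_a:
            cats.append('up')         # the intervals prove an ascent
        elif hi_b < lo_a:
            cats.append('down')       # the intervals prove a descent
        else:
            cats.append('uncertain')  # overlapping intervals: no safe conclusion
    return cats
```

A step smaller than the error bound is deliberately reported as 'uncertain' instead of being forced into a category, which is the point of the interval treatment.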

Relevance:

90.00%

Publisher:

Abstract:

This study sought to verify the influence of supply chain agents on new product development performance when the agents are analyzed jointly. The motivation for this research came from studies that called for treating supply chain integration as a multidimensional construct, encompassing the involvement of manufacturing, suppliers, and customers in new product development, and from the lack of information on the individual influences of these agents on new product development. Under these considerations, we sought to build an analytical model based on Social Capital Theory and Absorptive Capacity, to construct hypotheses from the literature review, and to connect constructs such as cooperation, supplier involvement in new product development (NPD), customer involvement in NPD, manufacturing involvement in NPD, anticipation of new technologies, continuous improvement, NPD operational performance, NPD market performance, and NPD business performance. To test the hypotheses, three moderating variables were considered: environmental turbulence (low, medium, and high), industry (electronics, machinery, and transport equipment), and location (America, Europe, and Asia). The model was tested with data from the High Performance Manufacturing project, which covers 339 companies from the electronics, machinery, and transport equipment industries located in eleven countries. The hypotheses were tested through Confirmatory Factor Analysis (CFA), including multi-group moderation for the three moderating variables mentioned above. The main results indicated that the hypotheses related to cooperation were confirmed in medium-turbulence environments, while the hypotheses related to NPD performance were confirmed in low-turbulence environments and in Asian countries.
Additionally, under the same conditions, suppliers, customers, and manufacturing influence new product performance differently. Supplier involvement directly influences operational performance and indirectly influences market and business performance at low levels of environmental turbulence, in the transport equipment industry, and in American and European countries. Likewise, customer involvement directly influenced operational performance and indirectly influenced market and business performance at a medium level of environmental turbulence, in the machinery industry, and in Asian countries. Suppliers and customers do not directly influence market and business performance, and do not indirectly influence operational performance. Manufacturing involvement did not influence any type of new product development performance in any of the tested scenarios.

Relevance:

90.00%

Publisher:

Abstract:

Climate monitoring requires an operational spatio-temporal analysis of climate variability. With the goal of regularly producing ready-to-use maps, it is helpful to display at a glance the spatial variability of the climate elements and their changes over time. For current and recent years, the German Weather Service (Deutscher Wetterdienst) developed a standard procedure for producing such maps. The method of producing such maps varies for the different climate elements depending on the data basis, the natural variability, and the availability of in-situ data. Within the analysis of spatio-temporal variability in this dissertation, various interpolation methods are applied to the mean temperature of the five decades of the years 1951-2000 for a relatively large area, Region VI of the World Meteorological Organization (Europe and the Middle East). The region covers a climatologically rather heterogeneous study area, from Greenland in the northwest to Syria in the southeast. The central goal of the dissertation is to develop a method for the spatial interpolation of the mean decadal temperature values for Region VI. In the future, this method should be suitable for the operational monthly production of climate maps. This unified method should be transferable to other climate elements and applicable anywhere with the corresponding software. Two central databases are used in this dissertation: so-called CLIMAT data over land and ship data over the sea. Essentially, the transfer of the point temperature values to the area by spatial interpolation is carried out in three steps. The first step involves a multiple regression to reduce the station values to a uniform level using the four predictors of geographic latitude, altitude above sea level, annual temperature amplitude, and thermal continentality.
In the second step, the reduced temperature values, so-called residuals, are interpolated with the radial basis function interpolation method from the group of neural network models (NNM). In the last step, the interpolated temperature grids are scaled back up to their original level by inverting the multiple regression from step one, using the four predictors. For all station values, the difference between the value estimated by the interpolation and the true measured value is computed and expressed by the geostatistical measure of the root mean square error (RMSE). The central advantages are the value-faithful reproduction, the absence of generalization, and the avoidance of interpolation islands. The developed procedure is transferable to other climate elements such as precipitation, snow depth, or sunshine duration.
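The three-step scheme can be sketched in miniature (a hedged toy version: a single predictor, altitude, and inverse-distance weighting standing in for the radial basis function network): reduce the station temperatures by regression, interpolate the residuals, and invert the regression at the target altitude:

```python
def fit_lapse(temps, alts):
    """Least-squares fit T ~ a + b * alt (step 1: reduce to a uniform level)."""
    n = len(temps)
    ma, mt = sum(alts) / n, sum(temps) / n
    b = (sum((h - ma) * (t - mt) for h, t in zip(alts, temps))
         / sum((h - ma) ** 2 for h in alts))
    return mt - b * ma, b

def interpolate(x, y, alt, stations, temps, alts, power=2.0):
    """Steps 2-3: inverse-distance interpolation of the regression residuals,
    then back-transformation to the target altitude."""
    a, b = fit_lapse(temps, alts)
    residuals = [t - (a + b * h) for t, h in zip(temps, alts)]
    num = den = 0.0
    for (sx, sy), r in zip(stations, residuals):
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:                 # exactly at a station: use its residual
            num, den = r, 1.0
            break
        w = d2 ** (-power / 2.0)      # inverse-distance weight
        num += w * r
        den += w
    return a + b * alt + num / den
```

If the station temperatures follow the altitude trend exactly, the residual field vanishes and the scheme reproduces the trend at any target point and altitude.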

Relevance:

90.00%

Publisher:

Abstract:

This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses the multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives, between variables and objectives, and between variables, as learnt in other Bayesian network-based EDAs. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to Pareto set approximation, the algorithm is also able to estimate the structure of the multi-objective problem. To apply the algorithm to many-objective problems, it includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better on many of the problems and across different objective-space dimensions, and achieves results comparable to those of the other algorithms on the remaining ones.
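The Pareto-based selection that underlies such multi-objective algorithms reduces to extracting the non-dominated solutions. A minimal, generic sketch (not the four ranking methods used in the paper) for minimization:

```python
def pareto_front(points):
    """Non-dominated subset for minimization: p is kept unless some q is
    no worse in every objective and strictly better in at least one."""
    def dominates(q, p):
        return (all(qi <= pi for qi, pi in zip(q, p))
                and any(qi < pi for qi, pi in zip(q, p)))
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Ranking methods for many-objective problems refine this binary relation, since with many objectives almost all solutions become mutually non-dominated.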

Relevance:

90.00%

Publisher:

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), which determines their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a type of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA has a logarithmic scale with respect to the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships.
An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that the ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
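Why regularized model estimation matters when the population is small relative to the dimension can be shown with a minimal sketch (diagonal shrinkage rather than the ℓ1-penalized estimators studied in the thesis): with n < d samples, the maximum-likelihood covariance matrix is singular and cannot be safely sampled from, while a slightly shrunk version is positive definite:

```python
import math
import random

def sample_covariance(samples):
    """Maximum-likelihood covariance estimate (singular when n <= d)."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / n
             for j in range(d)] for i in range(d)]

def shrink_to_diagonal(S, alpha=0.2):
    """Blend S with its own diagonal; positive definite for alpha > 0."""
    d = len(S)
    return [[(1.0 - alpha) * S[i][j] + (alpha * S[i][i] if i == j else 0.0)
             for j in range(d)] for i in range(d)]

def min_cholesky_pivot(A):
    """Smallest pivot met during a Cholesky factorization: clearly positive
    iff A is numerically positive definite."""
    d = len(A)
    L = [[0.0] * d for _ in range(d)]
    min_pivot = float('inf')
    for i in range(d):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                v = A[i][i] - s
                min_pivot = min(min_pivot, v)
                L[i][j] = math.sqrt(max(v, 1e-12))  # clamp to keep going
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return min_pivot
```

With 10 samples in 20 dimensions the sample covariance has rank at most 9, so a Cholesky pivot collapses to zero, while the shrunk estimate stays comfortably positive definite.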

Relevance:

90.00%

Publisher:

Abstract:

The master's thesis presents methods for the intelligent analysis and visualization of 3D ECG in order to increase the efficiency of ECG analysis by extracting additional data. Visualization is presented as part of the signal-analysis task; imaging techniques and their mathematical description are considered. Algorithms for computing and visualizing signal attributes have been developed and are described using mathematical methods and tools for signal mining. A pattern-search model was constructed for comparing the accuracy of the methods, clustering and classification problems were solved, and a data-visualization program was developed. This approach gives the highest accuracy in the intelligent-analysis task, which is confirmed in this work. The visualization and analysis techniques considered are also applicable to multi-dimensional signals of other kinds.

Relevance:

90.00%

Publisher:

Abstract:

The notorious "dimensionality curse" is a well-known phenomenon for any multi-dimensional index attempting to scale up to high dimensions. One well-known approach to overcoming the degradation in performance with increasing dimensionality is to reduce the dimensionality of the original dataset before constructing the index. However, identifying the correlation among the dimensions and effectively reducing them are challenging tasks. In this paper, we present an adaptive Multi-level Mahalanobis-based Dimensionality Reduction (MMDR) technique for high-dimensional indexing. Our MMDR technique has four notable features compared to existing methods. First, it discovers elliptical clusters for more effective dimensionality reduction by using only the low-dimensional subspaces. Second, data points in the different axis systems are indexed using a single B+-tree. Third, our technique is highly scalable in terms of data size and dimensionality. Finally, it is also dynamic and adaptive to insertions. An extensive performance study was conducted using both real and synthetic datasets, and the results show that our technique not only achieves higher precision, but also enables queries to be processed efficiently. Copyright Springer-Verlag 2005
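The core operation behind such reduction, projecting points onto the principal axes of a discovered (elliptical) cluster, can be sketched in closed form for the two-dimensional case (a generic PCA step, not the actual MMDR algorithm):

```python
import math

def first_principal_axis(points):
    """Leading eigenvector of the 2x2 covariance matrix of 2-D points,
    i.e. the single direction that preserves the most variance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # closed-form largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2.0 + math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    if abs(sxy) < 1e-12:              # already axis-aligned
        return (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    vx, vy = sxy, lam - sxx           # eigenvector for the largest eigenvalue
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm
```

Dropping the coordinate along the remaining axis then reduces the dimensionality while keeping the cluster's dominant variance, which is the idea the index exploits per cluster.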

Relevance:

90.00%

Publisher:

Abstract:

Marusia N. Slavchova-Bozhkova - In the present work, a limit theorem for a subcritical multi-dimensional age-dependent branching process with two types of immigration is generalized. The aim is to generalize an analogous result from the one-dimensional case by applying the coupling method, renewal theory, and regenerative processes.

Relevance:

90.00%

Publisher:

Abstract:

Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2014

Relevance:

80.00%

Publisher:

Abstract:

Technology-based self-service (TBSS) enables consumers to complete services themselves using a technological interface. As evaluations of consumer satisfaction and commitment have typically focused on interpersonal interactions, the effect of TBSS on these is under-researched. This paper explores the impact of TBSS on consumer satisfaction and on a multidimensional measure of consumer commitment. Data are collected from 241 hotel guests. The results suggest personal service is more important for satisfaction and commitment. This has implications for marketing, as the benefits of adopting TBSS are not clear. Multi-dimensional commitment provides some interesting findings and suggests the need for further research into TBSS and commitment.