967 results for 020503 Nonlinear Optics and Spectroscopy


Relevance:

100.00%

Publisher:

Abstract:

Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms whereby the frequency and power of oscillatory activity are modulated, or about how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features), but what mediates these spectral shifts among neural populations remains unresolved. Extending previous findings on stochastic nonlinear systems and experimental observations, we provide analytical insights into the oscillatory responses of neural populations to stimulation of either endogenous or exogenous origin. Using a deceptively simple, sparse, and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear, input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. Theoretical and computational analysis shows that the ensuing spectral fluctuations reflect the underlying dynamics of the input stimuli driving the neurons. These results point to a generic mechanism supporting the spectral transitions observed across cortical networks and spanning multiple frequency bands.
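
A minimal sketch of how such input-dependent spectral shifts can be quantified: a noise-driven damped oscillator stands in for a synchronous population (this is not the spiking network of the abstract), the input level is assumed to modulate its effective frequency and gain, and Welch's method extracts the peak frequency and power.

```python
# Minimal sketch (not the authors' spiking network): a noise-driven damped
# oscillator is a stand-in for a synchronous population, and the input level
# is assumed to shift its effective frequency and gain. Welch's method then
# quantifies the resulting changes in peak frequency and power.
import numpy as np
from scipy.signal import welch

def population_proxy(input_level, T=60.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    f0 = 10.0 + 4.0 * input_level        # Hz; assumed input dependence
    gain = 1.0 + 0.5 * input_level       # assumed input-dependent gain
    w0, gamma = 2 * np.pi * f0, 15.0     # natural frequency, damping (1/s)
    n = int(T / dt)
    x = np.zeros(n)
    v = 0.0
    for k in range(1, n):                # semi-implicit Euler integration
        noise = gain * rng.standard_normal() / np.sqrt(dt)
        a = -2 * gamma * v - w0 ** 2 * x[k - 1] + 50.0 * noise
        v += a * dt
        x[k] = x[k - 1] + v * dt
    return x

for level in (0.0, 1.0):
    x = population_proxy(level)
    f, pxx = welch(x, fs=1000.0, nperseg=4096)
    k = np.argmax(pxx)
    print(f"input={level:.1f}: peak {f[k]:.1f} Hz, peak power {pxx[k]:.2e}")
```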

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes a very fast method for blindly approximating a nonlinear mapping that transforms a sum of random variables. The estimate is surprisingly good even when the basic assumption is not satisfied. We use the method to provide a good initialization for inverting post-nonlinear mixtures and Wiener systems. Experiments show that the algorithm's speed is greatly improved and its asymptotic performance is preserved, at very low extra computational cost.
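
One common way to exploit the underlying assumption (the unobserved sum of random variables being close to Gaussian) is a Gaussianization step: map the empirical CDF of the distorted observation onto standard-normal quantiles to approximate the inverse nonlinearity. The sketch below illustrates that idea; it is a plausible reading of the approach, not necessarily the exact algorithm of the paper.

```python
# Hedged sketch of a Gaussianization-style inverse: if the hidden linear
# mixture is roughly Gaussian (CLT), mapping the empirical CDF of the distorted
# observation onto Gaussian quantiles approximates the inverse nonlinearity.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
s = rng.uniform(-1, 1, (4, 20000))       # independent sources
z = np.ones(4) @ s                       # their sum is near-Gaussian
x = np.tanh(0.8 * z)                     # unknown post-nonlinear distortion

# Empirical CDF of x, mapped to standard-normal quantiles.
order = np.argsort(x)
ranks = np.empty_like(x)
ranks[order] = (np.arange(x.size) + 0.5) / x.size
z_hat = norm.ppf(ranks)                  # estimate of the hidden linear part

# Up to scale and shift, z_hat should correlate strongly with the true z.
print("corr(z, z_hat) =", np.corrcoef(z, z_hat)[0, 1])
```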

Relevance:

100.00%

Publisher:

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies.

The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence that is mainly concerned with developing techniques and algorithms allowing computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In short, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can solve classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical coordinates and additional relevant spatially referenced variables ("geo-features"). They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining ranging from pattern recognition to modeling, prediction and automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from the theoretical description of the concepts to their software implementation. The main algorithms and models considered are the multilayer perceptron (MLP, a workhorse of machine learning), general regression neural networks (GRNN), probabilistic neural networks (PNN), self-organising (Kohonen) maps (SOM), Gaussian mixture models (GMM), radial basis function networks (RBF) and mixture density networks (MDN). This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of any data analysis. In this thesis, the concepts of exploratory spatial data analysis (ESDA) are treated using both the traditional geostatistical approach, namely experimental variography, and machine learning. Experimental variography, which studies the relations between pairs of points, is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect spatial patterns describable by two-point statistics. A machine learning approach to ESDA is presented through the k-nearest neighbours (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a topical problem, the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model for this task. The performance of the GRNN model is demonstrated on Spatial Interpolation Comparison (SIC) 2004 data, where it significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. These tools were developed over the last 15 years and have been used both in many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and in fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydrogeological units, decision-oriented mapping with uncertainties, and natural hazard (landslide, avalanche) assessment and susceptibility mapping. Complementary tools for exploratory data analysis and visualisation were also developed, with care taken to keep the software user friendly and easy to use.
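
Since a GRNN is essentially Nadaraya-Watson kernel regression, a compact sketch of GRNN-style spatial prediction is given below. The kernel width sigma and the toy data are illustrative assumptions; a real automatic-mapping workflow (as in SIC 2004) would tune sigma by cross-validation.

```python
# Minimal GRNN (Nadaraya-Watson kernel regression) sketch for spatial
# interpolation; sigma is the single hyperparameter.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Predict y at X_query as a Gaussian-kernel-weighted average of targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy example: noisy samples of a smooth spatial field on [0, 10]^2.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1]) + 0.1 * rng.standard_normal(200)
grid = np.array([[2.0, 3.0], [7.5, 1.0]])
print(grnn_predict(X, y, grid, sigma=0.8))
```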

Relevance:

100.00%

Publisher:

Abstract:

The kinetics of binding of a glycolipid-anchored protein (the promastigote surface protease, PSP) to planar lecithin bilayers is studied by an integrated optics technique, in which the bilayer membrane is supported on an optical wave guide and the phase velocities of guided light modes in the wave guide are measured. From these velocities, the optical parameters of the membrane and PSP layers deposited on the waveguide are determined, yielding in particular the mass of PSP bound to the membrane, which is followed in real time. From a comparison of the binding rates of PSP and PSP from which the lipid moiety has been removed, it is shown that the lipid moiety plays a key role in anchoring the protein to the membrane. Specific and nonspecific binding of antibodies to membrane-anchored PSP is also investigated. As little as a fifth of a monolayer of PSP is sufficient to suppress the appreciable nonspecific binding of antibodies to the membrane.
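
For context, real-time surface-mass traces of the kind described above are often summarized with a pseudo-first-order (Langmuir) association model. The sketch below fits such a model to synthetic data; the model, parameter names and numbers are illustrative, not the authors' analysis.

```python
# Hedged sketch: a generic pseudo-first-order (Langmuir) association model
# fitted to a synthetic bound-mass trace standing in for the waveguide readout.
import numpy as np
from scipy.optimize import curve_fit

def association(t, gamma_max, k_obs):
    # Gamma(t) = Gamma_max * (1 - exp(-k_obs * t)), with k_obs = k_on*C + k_off
    return gamma_max * (1.0 - np.exp(-k_obs * t))

t = np.linspace(0, 600, 121)                       # s
rng = np.random.default_rng(3)
data = association(t, 120.0, 0.01) + 2.0 * rng.standard_normal(t.size)

popt, pcov = curve_fit(association, t, data, p0=(100.0, 0.005))
print(f"Gamma_max ~ {popt[0]:.1f} (mass units), k_obs ~ {popt[1]:.4f} 1/s")
```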

Relevance:

100.00%

Publisher:

Abstract:

ZnO nanorods grown by both high-temperature vapour phase transport and low-temperature chemical bath deposition are very promising sources for UV third harmonic generation. Material grown by both methods shows comparable efficiencies, in both cases an order of magnitude higher than surface third harmonic generation at the quartz-air interface of a bare quartz substrate. This result is in stark contrast to the linear optical properties of ZnO nanorods grown by these two methods, which show vastly different photoluminescence (PL) efficiencies. The generated third-harmonic signal is analysed using intensity-dependent measurements and interferometric frequency resolved optical gating, allowing extraction of the laser pulse parameters. The comparable efficiencies of ZnO grown by these very different methods as sources of third-harmonic UV generation provide a broad suite of possible growth methods to suit various substrate, coverage and scalability requirements. Potential application areas range from interferometric frequency resolved optical gating characterization of few-cycle fs pulses to single-cell UV irradiation for biophysical studies.
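
The intensity-dependent measurements mentioned above rest on the cubic scaling of third-harmonic generation with pump intensity. The sketch below checks that scaling as a log-log slope close to 3; the data are synthetic placeholders.

```python
# Sketch of the intensity-dependence check: third-harmonic signal ~ I_pump^3,
# i.e. a slope of ~3 on a log-log fit. Data here are synthetic placeholders.
import numpy as np

pump = np.linspace(1.0, 10.0, 12)                  # arbitrary intensity units
rng = np.random.default_rng(7)
thg = 0.02 * pump ** 3 * (1 + 0.05 * rng.standard_normal(pump.size))

slope, intercept = np.polyfit(np.log(pump), np.log(thg), 1)
print(f"log-log slope = {slope:.2f} (expected ~3 for THG)")
```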

Relevance:

100.00%

Publisher:

Abstract:

In this dissertation, Active Galactic Nuclei (AGN) and their host galaxies are discussed. Together with transient events such as supernovae and gamma-ray bursts, AGN are the most energetic phenomena in the Universe. The dominant fraction of their luminosity originates from the center of a galaxy, where accreting gas falls into a supermassive black hole, converting gravitational energy to radiation. AGN have a wide range of observed properties, e.g. in their emission lines, radio emission, and variability. Most likely, these properties depend significantly on their orientation to our line of sight, and to unify AGN into physical classes it is crucial to observe their orientation-independent properties, such as the host galaxies. Furthermore, host galaxy studies are essential to understanding the formation and co-evolution of galactic bulges and supermassive black holes. The main focus of this thesis is on observationally characterizing AGN host galaxies using optical and near-infrared imaging and spectroscopy. BL Lac objects are a class of AGN characterized by rapidly variable and polarized continuum emission across the electromagnetic spectrum, and core-dominated radio emission. The near-infrared properties of intermediate-redshift BL Lac host galaxies are studied in Paper I. They are found to be large elliptical galaxies that are more luminous than their low-redshift counterparts, suggesting a strong luminosity evolution and a contribution from a recent star formation episode. To analyze the stellar content of galaxies in more detail, multicolor data, especially observations at blue wavelengths, are essential. In Paper III, optical - near-infrared colors and color gradients are derived for low-redshift BL Lac host galaxies. They show bluer colors and steeper color gradients than inactive ellipticals, most likely caused by a relatively young stellar population and indicating a different evolutionary stage between AGN hosts and inactive ellipticals. In Paper II, near-infrared imaging of intermediate-redshift radio-quiet quasar hosts is used to study their luminosity evolution. The hosts are large elliptical galaxies, but they are systematically fainter than the hosts of radio-loud quasars at similar redshifts, suggesting a link between the luminosity of the host galaxies and the radio properties of AGN. In Paper IV, the characteristics of near-infrared stellar absorption features of low-redshift radio galaxies are compared with those of inactive early-type galaxies. The comparison suggests that early-type galaxies with AGN are in a different evolutionary stage than their inactive counterparts. Moreover, radio galaxies are found to contain both old and intermediate-age stellar population components.

Relevance:

100.00%

Publisher:

Abstract:

Obesity has become the leading cause of many chronic diseases, such as type 2 diabetes and cardiovascular diseases. The prevalence of obesity is high in developed countries, and it is also a major cause of health service use. Ectopic fat accumulation in organs may lead to metabolic disturbances, such as insulin resistance. Weight loss with a very-low-energy diet is known to be safe and efficient. Weight loss improves whole-body insulin sensitivity, but its effects at the tissue and organ level in vivo are not well known. The aims of these studies were to investigate weight-loss-induced changes in glucose and fatty acid uptake, perfusion, and fat distribution at the tissue and organ level, using positron emission tomography and magnetic resonance imaging and spectroscopy in 34 healthy obese subjects. The results showed that whole-body insulin sensitivity increased after weight loss with a very-low-energy diet, and that this was associated with improved skeletal muscle insulin-stimulated glucose uptake but not with adipose tissue, liver or heart glucose uptake. Liver insulin resistance decreased after weight loss. Liver and heart free fatty acid uptake decreased concomitantly with liver and heart triglyceride content. Adipose tissue and myocardial perfusion decreased. In conclusion, enhanced skeletal muscle glucose uptake leads to an increase in whole-body insulin sensitivity when glucose uptake is preserved in the other organs studied. These findings suggest that the lipid accumulation found in the liver and the heart of obese subjects without co-morbidities is in part reversible through reduced free fatty acid uptake after weight loss. Reduced lipid accumulation in organs may improve metabolic disturbances, e.g. decrease liver insulin resistance. Keywords: obesity, weight loss, very-low-energy diet, adipose tissue metabolism, liver metabolism, heart metabolism, positron emission tomography

Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis is devoted to the study of semiconductor samples using time-resolved photoluminescence. This method makes it possible to investigate recombination in semiconductor samples in order to improve the quality of optoelectronic devices. An additional goal was to adapt the method to low-energy-gap materials. The first chapter gives a brief introduction to the basics of semiconductor physics, and the key features of the investigated structures are noted. The application area of the results covers saturable semiconductor absorber mirrors, disk lasers and vertical-external-cavity surface-emitting lasers. The experimental set-up is described in the second chapter. It is based on an up-conversion procedure using a nonlinear crystal and involving the photoluminescence emission and the gate pulses. The limitations of the method were estimated. The first series of samples studied was grown at various temperatures and subjected to rapid thermal annealing. In addition, lattice-matched and metamorphically grown samples were compared. The time-resolved photoluminescence method was adapted for wavelengths up to 1.5 µm. The results made it possible to specify the optimal substrate temperature for the MBE process. It was found that the lattice-matched sample and the metamorphically grown sample had similar characteristics.

Relevance:

100.00%

Publisher:

Abstract:

Pure and Fe(III)-doped TiO2 suspensions were prepared by the sol-gel method using titanium isopropoxide (Ti(OPri)4) as the precursor material. The properties of the doped materials were compared with those of undoped TiO2 on the basis of characterization by thermal analysis (TG-DTA and DSC), X-ray powder diffractometry and FTIR spectroscopy. Both undoped and doped TiO2 suspensions were used to coat a metallic substrate as a means of preparing thin-film electrodes. Thermal treatment of the precursors at 400 ºC for 2 h in air resulted in the formation of nanocrystalline anatase TiO2. The thin-film electrodes were tested with respect to their photocatalytic performance for the degradation of a textile dye in aqueous solution. Plain TiO2 remained the best catalyst under the conditions used in this work.

Relevance:

100.00%

Publisher:

Abstract:

The stabilizing free energy of β-trypsin was determined by hydrogen ion titration. In the pH range from 3.0 to 7.0, the change in the free energy difference for the stabilization of the native protein relative to the unfolded one (ΔΔG0,titration) was 9.51 ± 0.06 kcal/mol. An isoelectric point of 10.0 was determined, allowing us to calculate the Tanford and Kirkwood electrostatic factor w. This factor showed nonlinear behavior and indicated more than one type of titratable carboxyl group in β-trypsin. In fact, one class of carboxyl groups with pK = 3.91 ± 0.01 and another with pK = 4.63 ± 0.03 were also found by hydrogen ion titration of the protein in the folded state.
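
To make the two reported pK classes concrete, the sketch below evaluates a two-class Henderson-Hasselbalch model of proton release over the titrated pH range. The numbers of groups per class (n1, n2) are free, purely illustrative parameters, not values from the study.

```python
# Two-class Henderson-Hasselbalch sketch using the reported pK values;
# n1 and n2 (groups per class) are illustrative assumptions.
import numpy as np

def protons_released(pH, n1, pK1, n2, pK2):
    """Deprotonated groups summed over two classes of carboxyls."""
    f1 = 1.0 / (1.0 + 10.0 ** (pK1 - pH))
    f2 = 1.0 / (1.0 + 10.0 ** (pK2 - pH))
    return n1 * f1 + n2 * f2

pH = np.linspace(3.0, 7.0, 9)
print(np.round(protons_released(pH, n1=4, pK1=3.91, n2=6, pK2=4.63), 2))
```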

Relevance:

100.00%

Publisher:

Abstract:

Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. Applicability of the nonlinear PCA to climate analysis and reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
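
To illustrate the ridge-projection idea on a Gaussian kernel density, the sketch below uses a simplified subspace-constrained mean-shift iteration rather than the trust region Newton method developed in the thesis; it targets the same ridge definition (generalized maxima), and the bandwidth and toy data are illustrative.

```python
# Simplified ridge-projection sketch: subspace-constrained mean shift on a
# Gaussian kernel density estimate (not the thesis's trust-region Newton method).
import numpy as np

def scms_project(x, data, h=0.3, n_iter=200, tol=1e-6):
    x = x.astype(float).copy()
    for _ in range(n_iter):
        diff = data - x                                     # (n, d)
        w = np.exp(-(diff ** 2).sum(axis=1) / (2 * h * h))  # kernel weights
        mean_shift = (w[:, None] * diff).sum(axis=0) / w.sum()
        # Hessian of the (unnormalized) KDE at x.
        H = (w[:, None, None] *
             (diff[:, :, None] * diff[:, None, :] / h ** 4
              - np.eye(x.size) / h ** 2)).sum(axis=0)
        eigval, eigvec = np.linalg.eigh(H)
        V = eigvec[:, :x.size - 1]      # directions of strongest curvature
        step = V @ (V.T @ mean_shift)   # constrain the step to that subspace
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy data: noisy points around a circle; the density ridge is near the circle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.1 * rng.standard_normal((500, 2))
print(scms_project(np.array([1.3, 0.2]), pts))  # should land near radius ~1
```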

Relevance:

100.00%

Publisher:

Abstract:

The application of continuous positive airway pressure (CPAP) produces important hemodynamic alterations, which can influence breathing pattern (BP) and heart rate variability (HRV). The aim of this study was to evaluate the effects of different levels of CPAP on postoperative BP and HRV after coronary artery bypass grafting (CABG) surgery, and the impact of CABG surgery on these variables. Eighteen patients undergoing CABG were evaluated postoperatively during spontaneous breathing (SB) and during four levels of CPAP applied in random order: sham (3 cmH2O), 5 cmH2O, 8 cmH2O, and 12 cmH2O. HRV was analyzed in the time and frequency domains and by nonlinear methods, and BP was characterized by several variables (breathing frequency, inspiratory tidal volume, inspiratory and expiratory time, total breath time, fractional inspiratory time, percent rib cage inspiratory contribution to tidal volume, phase relation during inspiration, phase relation during expiration). HRV and BP were significantly impaired after CABG surgery compared with the preoperative period, whereas the DFAα1, DFAα2 and SD2 indexes and the ventilatory variables improved during postoperative CPAP application, with a greater effect at 8 and 12 cmH2O. A positive Spearman correlation (P < 0.05, r = 0.64) was found between the changes (from SB to 12 cmH2O) in DFAα1 and in inspiratory time. Acute application of CPAP was able to alter cardiac autonomic nervous system control and the BP of patients undergoing CABG surgery, and 8 and 12 cmH2O of CPAP provided the best performance of pulmonary and cardiac autonomic functions.
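
DFAα1, one of the nonlinear HRV indexes used above, is the short-term scaling exponent from detrended fluctuation analysis. Below is a compact sketch of its computation on a synthetic RR-interval series; the 4-16 beat scale range follows common practice and is an assumption, not necessarily the study's exact setting.

```python
# Sketch of short-term detrended fluctuation analysis (DFA alpha1) on a
# synthetic RR-interval series; white noise should give alpha1 ~ 0.5.
import numpy as np

def dfa_alpha1(rr, scales=range(4, 17)):
    y = np.cumsum(rr - np.mean(rr))             # integrated series
    log_n, log_f = [], []
    for n in scales:
        n_boxes = len(y) // n
        if n_boxes < 2:
            continue
        segs = y[:n_boxes * n].reshape(n_boxes, n)
        t = np.arange(n)
        ms_resid = []
        for seg in segs:                         # linear detrend per box
            coef = np.polyfit(t, seg, 1)
            ms_resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(ms_resid)))
    return np.polyfit(log_n, log_f, 1)[0]        # slope = alpha1

rr = 0.8 + 0.05 * np.random.default_rng(5).standard_normal(600)  # seconds
print("alpha1 ~", round(dfa_alpha1(rr), 2))
```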

Relevance:

100.00%

Publisher:

Abstract:

Cell therapy is a promising avenue for myocardial regeneration, through replacement of necrotic tissue, prevention of apoptosis in the surviving myocardium, or improvement of neovascularization. Bone marrow stem cells (BMSCs) express cardiac markers in vitro when exposed to inducers, and for this reason they have been used in cell therapy of myocardial infarction in preclinical and clinical studies. Recently, possible beneficial effects of oxytocin (OT) in infarction have been suggested. OT is an inducer of cardiac differentiation of embryonic stem cells, and this differentiation is mediated by the nitric oxide (NO)-soluble guanylyl cyclase signaling pathway. However, pharmacokinetic data attribute a nonlinear profile to OT, which could explain the controversial pharmacodynamic effects reported in the literature. The objectives of this doctoral program were: 1) To characterize the pharmacokinetic profile of different OT dosing regimens in the pig, by developing pharmacokinetic/pharmacodynamic modeling better suited to integrating the observed biological (renal, cardiovascular) effects. 2) To isolate and differentiate porcine BMSCs (pBMSCs) and to find the optimal time for inducing their differentiation, based on the expression of cardiac transcription factors and structural proteins found at different passages. 3) To induce and quantify OT-mediated cardiac differentiation of pBMSCs. 4) To verify the role of NO in this cardiac differentiation of pBMSCs. We found that the pharmacokinetic profile of OT is best explained by the model known as target-mediated drug disposition (TMDD), because the residence time of OT in the body depends on its capacity to bind its receptor as well as on its elimination (metabolism). Moreover, we found that OT-mediated cardiomyogenic differentiation of pBMSCs should be induced during the first passages, because the number of passages modifies the phenotypic profile of pBMSCs as well as their differentiation potential. We observed that OT is an inducer of cardiomyogenic differentiation of pBMSCs: OT-induced cells express cardiac markers, and the expression of specific cardiac proteins was more abundant in OT-treated cells than in cells treated with 5-azacytidine, which has been widely used as an inducer of cardiac differentiation of adult stem cells. OT also caused proliferation of pBMSCs. Finally, we observed that inhibition of the NO signaling pathway significantly affects the expression of specific cardiac proteins. In conclusion, these studies point to a real potential of OT for adult stem cell-based cardiomyogenic cell therapy, but emphasize that its use will require caution and further knowledge.
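
The target-mediated drug disposition (TMDD) structure invoked above couples free drug, free receptor and drug-receptor complex through binding, receptor turnover and complex internalization. The sketch below integrates the generic TMDD equations; all rate constants, the dose and the time grid are illustrative placeholders, not the oxytocin estimates from this work.

```python
# Generic TMDD model sketch: free drug C, free receptor R, complex RC.
# Parameter values are illustrative placeholders only.
import numpy as np
from scipy.integrate import solve_ivp

def tmdd(t, y, kel, kon, koff, kint, ksyn, kdeg):
    C, R, RC = y
    bind = kon * C * R - koff * RC
    dC = -kel * C - bind            # linear elimination + receptor binding
    dR = ksyn - kdeg * R - bind     # receptor synthesis/degradation
    dRC = bind - kint * RC          # complex internalization
    return [dC, dR, dRC]

pars = dict(kel=0.1, kon=0.5, koff=0.05, kint=0.2, ksyn=0.1, kdeg=0.1)
y0 = [10.0, pars["ksyn"] / pars["kdeg"], 0.0]   # IV bolus, receptor at baseline
sol = solve_ivp(tmdd, (0.0, 48.0), y0, args=tuple(pars.values()),
                t_eval=np.linspace(0, 48, 7), rtol=1e-8)
print(np.round(sol.y[0], 3))        # free-drug profile shows nonlinear decline
```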

Relevance:

100.00%

Publisher:

Abstract:

While magnetic resonance imaging (MRI) provides a wide range of anatomical and functional data, clinical scanners are generally restricted to using the proton for imaging and spectroscopic applications. Since phosphorus plays a central role in energy metabolism, using this nucleus for MR spectroscopy offers a major advantage for observing the human body, but it also raises a number of technical challenges because of the low concentration of phosphorus and its different resonance frequency. The objective of this project was to develop the capability to perform phosphorus spectroscopy experiments on a clinical 3 Tesla MRI scanner. We present the steps required to design and validate an MRI coil tuned to the phosphorus frequency. We also present information on the construction of the phantoms used for validation tests and calibration. Finally, we present preliminary results of spectroscopic acquisitions on human muscle that allow the different high-energy phosphorylated metabolites to be identified. These results are part of a larger project in which changes in energy metabolism are studied in relation to age and disease.

Relevance:

100.00%

Publisher:

Abstract:

Function projective synchronization (FPS) is a more general form of synchronization. Hyperchaotic systems, which possess more than one positive Lyapunov exponent, exhibit highly complex behaviour and are better suited to applications such as secure communications. In this thesis we report studies of FPS and modified function projective synchronization (MFPS) of a few chaotic and hyperchaotic systems. When all the parameters of the system are known, we show that the active nonlinear control method can be used effectively to obtain FPS. Adaptive nonlinear control and the OPCL (open-plus-closed-loop) control method are employed to obtain FPS and MFPS when some or all parameters of the system are uncertain. A secure communication scheme based on MFPS is also proposed theoretically. All our theoretical calculations are verified by numerical simulations.
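
As a concrete illustration of the error-dynamics idea behind active control, the sketch below enforces function projective synchronization between two Lorenz systems by cancelling the response dynamics and imposing e' = -k e, so that y(t) tracks alpha(t) x(t). The scaling function, gain and initial conditions are illustrative assumptions; published FPS controllers, including those in the thesis, are typically more economical than this brute-force cancellation.

```python
# FPS sketch: drive Lorenz system x, response Lorenz system y, active control
# u chosen so that the error e = y - alpha(t) * x obeys e' = -k * e.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta, k = 10.0, 28.0, 8.0 / 3.0, 5.0
alpha = lambda t: 2.0 + 0.5 * np.sin(0.2 * t)     # scaling function (assumed)
dalpha = lambda t: 0.1 * np.cos(0.2 * t)          # its time derivative

def lorenz(s):
    x1, x2, x3 = s
    return np.array([sigma * (x2 - x1),
                     x1 * (rho - x3) - x2,
                     x1 * x2 - beta * x3])

def coupled(t, s):
    x, y = s[:3], s[3:]
    e = y - alpha(t) * x
    # Cancel response dynamics, feed forward drive dynamics, damp the error.
    u = -lorenz(y) + alpha(t) * lorenz(x) + dalpha(t) * x - k * e
    return np.concatenate([lorenz(x), lorenz(y) + u])

sol = solve_ivp(coupled, (0, 20), [1, 1, 1, 5, -3, 10],
                t_eval=np.linspace(0, 20, 5), rtol=1e-8)
for t, s in zip(sol.t, sol.y.T):
    err = np.linalg.norm(s[3:] - alpha(t) * s[:3])
    print(f"t={t:4.1f}  ||y - alpha(t)*x|| = {err:.2e}")
```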