968 results for Non linear processes
Abstract:
Light, in both its physical and philosophical senses, has captured the human imagination since the dawn of civilization. The invention of the laser in the 1960s caused a renaissance in the field of optics. This intense, monochromatic, highly directional radiation created new frontiers in science and technology. The strong oscillating electric field of laser radiation creates a polarisation response that is nonlinear in character in the medium through which it passes, and the medium acts as a new source of optical field with altered properties. It was in this context that the field of optoelectronics, which encompasses the generation, modulation, transmission, etc. of optical radiation, gained tremendous importance. Organic molecules and polymeric systems have emerged as a class of promising materials for optoelectronics because they offer the flexibility, at both the molecular and bulk levels, to optimize the nonlinearity and other properties suitable for device applications. Organic nonlinear optical media, which yield large third-order nonlinearities, have been widely studied to develop optical devices such as high-speed switches and optical limiters. Transparent polymeric materials have found one of their most promising applications in lasers, in which they can be used as active elements when doped with suitable laser dyes. Solid-matrix dye lasers make it possible to combine the advantages of solid-state lasers with the possibility of tuning the radiation over a broad spectral range. Polymeric matrices impregnated with organic dyes have not yet been widely used because of the low resistance of the polymeric matrices to laser damage, their low dye photostability, and low dye stability over longer periods of operation and storage. In this thesis we investigate the nonlinear and radiative properties of certain organic materials and doped polymeric matrices and their possible role in device development.
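The nonlinear polarisation response described here is conventionally written as a power series in the applied field; this standard textbook expansion (background material, not a result of the thesis) makes the role of the third-order term explicit:

```latex
\mathbf{P}(t) = \epsilon_0 \left[ \chi^{(1)} \mathbf{E}(t)
              + \chi^{(2)} \mathbf{E}^{2}(t)
              + \chi^{(3)} \mathbf{E}^{3}(t) + \cdots \right]
```

At ordinary intensities only the linear susceptibility \(\chi^{(1)}\) matters; the strong field of a laser makes the higher-order terms appreciable, and it is the \(\chi^{(3)}\) term that underlies the large third-order nonlinearities exploited for high-speed switching and optical limiting.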
Abstract:
Psychological well-being, understood as the psychological facet of the broader concept of quality of life, is an expanding field of study. Although it has a shorter history than other psychosocial constructs, researchers from the most diverse disciplines keep joining the list of scholars who make psychological well-being one of their research objects. Even so, the study of psychological well-being in adolescence is probably one of the areas where the need for further progress is most evident. Its study in adolescents has, moreover, a double interest. On the one hand, the changes and transitions that boys and girls experience during adolescence often make it a stressful period for many of them, with important implications for their psychological well-being. Deepening our knowledge of this period also has an interest beyond the strictly scientific one, since it allows the design of prevention programmes better adjusted to the problems adolescents may be experiencing. Exploring the elements of psychological well-being is one strategy for approaching its study. This doctoral thesis selects some of the elements that the scientific literature suggests are most closely connected with psychological well-being: satisfaction with life as a whole and with specific life domains, self-esteem, perceived social support, perception of control, and values. Although there is broad consensus that exploring these elements is essential for understanding the structure of psychological well-being, they have generally been studied separately, despite attempts at theoretical integration.
The most important limitations currently affecting the study of psychological well-being and its elements are basically epistemological, and refer to the difficulty of finding shared views (both definitions and explanatory theories) accepted by a majority of social researchers. These limitations justify the interest in turning to another kind of explanation of psychological well-being, qualitatively different from those available, that takes refuge neither in reductionism nor in rigid causal explanations. Complexity theories are a productive alternative in this sense, since the characteristics through which complexity manifests itself (fuzzy boundaries, catastrophe points, fractal dimensions, chaotic and non-linear processes) are, ultimately, the same properties that characterise psychosocial phenomena, including psychological well-being. The available data, obtained through a cross-sectional study, rule out approaching psychological well-being from all of the above properties of complexity except non-linearity. The general objective of the thesis has been to build a model of psychological well-being from the data obtained that would make it possible to: 1) reveal relationships between variables that have so far been little explored; 2) consider these relationships beyond their unidirectionality; and 3) understand psychological well-being in adolescence from a more integrative and holistic point of view and, consequently, offer a more comprehensive way of approaching this phenomenon. This thesis should be understood as a first, fundamentally methodological, step towards the future elaboration of conceptualisations of psychological well-being in adolescence based on the principles provided by the sciences of complexity.
Although the results obtained are not free of limitations, they open new perspectives for the analysis of psychological well-being in adolescence.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Artificial neural networks (ANNs) have been widely applied to the resolution of complex biological problems. An important feature of neural models is that their implementation is not precluded by the theoretical distribution shape of the data used. Frequently, the performance of ANNs over linear or non-linear regression-based statistical methods is deemed to be significantly superior if suitable sample sizes are provided, especially in multidimensional and non-linear processes. The current work was aimed at utilising three well-known neural network methods in order to evaluate whether these models would be able to provide more accurate outcomes in relation to a conventional regression method in pupal weight predictions of Chrysomya megacephala, a species of blowfly (Diptera: Calliphoridae), using larval density (i.e. the initial number of larvae), amount of available food and pupal size as input data. It was possible to notice that the neural networks yielded more accurate performances in comparison with the statistical model (multiple regression). Assessing the three types of networks utilised (Multi-layer Perceptron, Radial Basis Function and Generalised Regression Neural Network), no considerable differences between these models were detected. The superiority of these neural models over a classical statistical method represents an important fact, because more accurate models may clarify several intricate aspects concerning the nutritional ecology of blowflies.
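As a rough illustration of the kind of comparison described above, the sketch below fits a multiple linear regression and a small Multi-layer Perceptron to synthetic data with a saturating, density-dependent response. The data-generating function, parameter values and network size are all illustrative assumptions; this is not the authors' dataset or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
density = rng.uniform(10, 400, n)   # hypothetical initial number of larvae
food = rng.uniform(5, 100, n)       # hypothetical amount of available food
size = rng.uniform(5, 12, n)        # hypothetical pupal size
# Assumed saturating response: more larvae per unit of food -> lighter pupae.
weight = 60 * food / (food + 0.3 * density) + 2 * size + rng.normal(0, 1, n)

X = np.column_stack([density, food, size])
linear = LinearRegression().fit(X, weight)
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0),
).fit(X, weight)

# In-sample fit; a real comparison would of course use held-out data.
mse_linear = mean_squared_error(weight, linear.predict(X))
mse_mlp = mean_squared_error(weight, mlp.predict(X))
```

On data generated this way the network can capture the curvature of the density/food interaction that the linear model cannot, which is the pattern the study reports.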
Abstract:
This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated the understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, so data processing is a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). Recently, however, graphics processing unit (GPU) based data processing methods have been developed to minimise this data processing and rendering time.
These processing techniques include standard processing methods, a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented into a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
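The "standard processing" chain mentioned above, in its simplest form, turns each spectral interferogram into an A-scan by removing the DC background and taking a Fourier transform. The minimal numpy sketch below illustrates that step for a single simulated reflector; all parameters are illustrative, and this is not the thesis's GPU pipeline.

```python
import numpy as np

n = 1024                      # spectrometer pixels (illustrative)
i = np.arange(n)
depth_bin = 100               # reflector depth, in FFT bins (illustrative)
# Single-reflector spectral interferogram: DC background plus a cosine fringe
# whose frequency encodes the reflector's depth.
fringes = 1.0 + 0.2 * np.cos(2 * np.pi * depth_bin * i / n)
# Standard processing: subtract the background, Fourier transform, take the
# magnitude, and keep the first half of the (symmetric) spectrum as the A-scan.
ascan = np.abs(np.fft.fft(fringes - fringes.mean()))[: n // 2]
peak = int(np.argmax(ascan))  # depth bin of the reconstructed reflector
```

A full pipeline would add background averaging over many A-scans, spectral windowing and resampling from wavelength to wavenumber before the FFT; it is steps like these, repeated per A-scan, that a GPU implementation accelerates.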
Abstract:
The solar driven photo-Fenton process for treating water containing phenol as a contaminant has been evaluated by means of pilot-scale experiments with a parabolic trough solar reactor (PTR). The effects of Fe(II) (0.04-1.0 mmol L⁻¹), H₂O₂ (7-270 mmol L⁻¹), initial phenol concentration (100 and 500 mg C L⁻¹), solar radiation, and operation mode (batch and fed-batch) on the process efficiency were investigated. More than 90% of the dissolved organic carbon (DOC) was removed within 3 hours of irradiation or less, a performance equivalent to that of artificially-irradiated reactors, indicating that solar light can be used either as an effective complementary or as an alternative source of photons for the photo-Fenton degradation process. A non-linear multivariable model based on a neural network was fit to the experimental results of batch-mode experiments in order to evaluate the relative importance of the process variables considered on the DOC removal over the reaction time. This included solar radiation, which is not a controlled variable. The observed behavior of the system in batch-mode was compared with fed-batch experiments carried out under similar conditions. The main contribution of the study consists of the results from experiments under different conditions and the discussion of the system behavior. Both constitute important information for the design and scale-up of solar radiation-based photodegradation processes.
Abstract:
The use of perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the geochemical processes under investigation can be constrained by a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis make it possible to model compositional changes relative to a reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes in the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
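The two operations named above have standard definitions in Aitchison's geometry of the simplex (textbook material, not code from this study): perturbation and powering play the roles of vector addition and scalar multiplication, which is what allows "linear" compositional processes to be treated with ordinary linear algebra.

```python
import numpy as np

def closure(x):
    """Rescale a vector of positive parts so they sum to one (a composition)."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: component-wise product, then closure (simplex 'addition')."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(alpha, x):
    """Power transformation: component-wise power, then closure (simplex 'scaling')."""
    return closure(np.asarray(x, dtype=float) ** alpha)

x = closure([1.0, 2.0, 7.0])     # illustrative 3-part composition
y = closure([2.0, 2.0, 1.0])
z = perturb(power(2.0, x), y)    # a "linear" operation carried out in the simplex
```

A compositional process evolving from a known starting point can then be written as that reference composition perturbed by a powered direction, which is the structure a non-centred principal component analysis can exploit.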
Abstract:
A new parametric minimum distance time-domain estimator for ARFIMA processes is introduced in this paper. The proposed estimator minimizes the sum of squared correlations of residuals obtained after filtering a series through ARFIMA parameters. The estimator is easy to compute and is consistent and asymptotically normally distributed for fractionally integrated (FI) processes with an integration order d strictly greater than -0.75. Therefore, it can be applied to both stationary and non-stationary processes. Deterministic components are also allowed in the DGP. Furthermore, as a by-product, the estimation procedure provides an immediate check on the adequacy of the specified model. This is so because the criterion function, when evaluated at the estimated values, coincides with the Box-Pierce goodness of fit statistic. Empirical applications and Monte Carlo simulations supporting the analytical results and showing the good performance of the estimator in finite samples are also provided.
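For the pure fractionally integrated case, the criterion behind such an estimator can be sketched as follows: filter the series by (1 - L)^d via the binomial expansion and sum the squared residual autocorrelations (the Box-Pierce quantity). The truncation, lag count and toy check below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def frac_diff(x, d):
    """Apply the truncated filter (1 - L)^d using its binomial pi-weights."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(n)])

def criterion(d, x, m=20):
    """Sum of squared autocorrelations of the filtered series, lags 1..m."""
    e = frac_diff(x, d)
    e = e - e.mean()
    gamma0 = np.dot(e, e)
    rho = [np.dot(e[:-k], e[k:]) / gamma0 for k in range(1, m + 1)]
    return float(np.sum(np.square(rho)))  # proportional to Box-Pierce / n

# Toy check: for white noise the criterion should be minimised near d = 0.
rng = np.random.default_rng(1)
x = rng.normal(size=400)
grid = np.arange(-0.4, 0.45, 0.05)
d_hat = float(grid[np.argmin([criterion(d, x) for d in grid])])
```

Because the minimised criterion is (up to scale) the Box-Pierce statistic of the residuals, its value at the optimum doubles as the goodness-of-fit check the abstract mentions.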
Abstract:
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Abstract:
The estimation of losses plays a key role in the process of building any electrical machine. Losses are usually estimated at the design stage by obtaining the characteristics of the electrical steel from the catalogue and calculating the losses from them. However, this approach is inaccurate, since the electrical steel undergoes several manufacturing processes during the construction of the machine, which directly affect its magnetic properties and hence its characteristics: the B-H curve of the steel obtained from the catalogue will have changed. Moreover, when the machine is loaded and rotating, further important changes occur to the B-H characteristic of the electrical steel, such as the stress on the laminated iron. Accordingly, the pre-estimated losses are far from the actual losses, because they were estimated from the catalogue data for the electrical steel. In order to estimate the losses precisely, the significant factors of the manufacturing processes must therefore be included. This paper introduces a systematic estimation of the losses that includes the effect of one of these manufacturing factors; any other manufacturing factor can be included in the pre-design loss estimation in the same way.
Abstract:
The objective of this thesis is to present multivariate time series models involving random vectors in which each component is non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models generalise to the multivariate case the MEM models introduced by Engle (2002), and they find applications notably in financial time series. vMEM models can describe time series of asset volumes, durations and conditional variances, to cite only these applications. They also allow joint modelling and the study of the dynamics between the time series forming the system under study. In order to model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach is to use random vectors whose error distribution is such that each component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), one possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, another approach is to use copula theory: marginal distributions with non-negative supports can be specified, and a copula function accounts for the dependence between the components.
One possible estimation technique is maximum likelihood. An alternative is the generalised method of moments (GMM). The latter has the advantage of being semi-parametric, in the sense that, unlike the approach that imposes a multivariate law, it does not require specifying a multivariate distribution for the error term. In general, estimating vMEM models is complicated: existing algorithms must handle the large number of parameters and the elaborate nature of the likelihood function, and in the GMM case the system to be solved also requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various parameters of the model. The first chapter defines stationary processes, autoregressive processes, autoregressive conditionally heteroscedastic (ARCH) processes and generalised ARCH (GARCH) processes, and also presents ACD duration models and MEM models. The second chapter presents the copula theory needed for our work in the framework of the vMEM models, and discusses possible estimation methods. The third chapter discusses simulation results for several estimation methods. The last chapter presents applications to financial series. The R code is provided in an appendix. A conclusion completes the thesis.
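The univariate MEM that the vMEM models generalise can be simulated in a few lines: a positive conditional mean follows a GARCH-type recursion and is multiplied by a non-negative error with unit mean. The parameter values and the gamma error below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
omega, alpha, beta = 0.1, 0.2, 0.7             # illustrative, alpha + beta < 1
n = 5000
eps = rng.gamma(shape=2.0, scale=0.5, size=n)  # non-negative error, E[eps] = 1

mu = np.empty(n)                    # conditional mean of x_t
x = np.empty(n)                     # observed non-negative series
mu[0] = omega / (1 - alpha - beta)  # start at the unconditional mean
x[0] = mu[0]
for t in range(1, n):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]  # MEM(1,1) recursion
    x[t] = mu[t] * eps[t]                                # x_t = mu_t * eps_t
```

A vMEM stacks several such equations and, as discussed above, couples the error components either through a multivariate law or through a copula over non-negative marginals.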
Abstract:
This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and telegraph processes with jumps. This first part includes the computation of the distributions of each process, their means and variances, and their moment generating functions, among other properties. Using these properties, the second part studies option pricing models based on telegraph processes with jumps. This part describes how to compute risk-neutral measures, establishes the no-arbitrage condition for this type of model and, finally, derives the prices of European call and put options.
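The basic symmetric telegraph process studied in the first part can be simulated by discretising time: the particle moves at speed c and reverses direction at the events of a rate-lambda Poisson process. The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
c, lam = 1.0, 2.0            # speed and switching intensity (illustrative)
T, dt = 10.0, 0.001          # horizon and time step
n = int(T / dt)

# In each small step the direction reverses with probability lam * dt.
flips = rng.random(n) < lam * dt
v = c * np.cumprod(np.where(flips, -1.0, 1.0))   # velocity path, values +-c
x = np.cumsum(v) * dt                            # telegraph process X_t
```

By construction the path satisfies |X_t| <= c t, the finite-velocity property that distinguishes telegraph models from Brownian ones; adding a jump at each velocity reversal gives the jump-telegraph processes used for option pricing in the second part.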
Abstract:
The purpose of Research Theme 4 (RT4) was to advance understanding of the basic science issues at the heart of the ENSEMBLES project, focusing on the key processes that govern climate variability and change, and that determine the predictability of climate. Particular attention was given to understanding linear and non-linear feedbacks that may lead to climate surprises, and to understanding the factors that govern the probability of extreme events. Improved understanding of these issues will contribute significantly to the quantification and reduction of uncertainty in seasonal to decadal predictions and projections of climate change. RT4 exploited the ENSEMBLES integrations (stream 1) performed in RT2A as well as undertaking its own experimentation to explore key processes within the climate system. It was working at the cutting edge of problems related to climate feedbacks, the interaction between climate variability and climate change, especially how climate change pertains to extreme events, and the predictability of the climate system on a range of time-scales. The statistical methodologies developed for extreme event analysis are new and state-of-the-art. The RT4-coordinated experiments, which have been conducted with six different atmospheric GCMs forced by common time-invariant sea surface temperature (SST) and sea-ice fields (removing some sources of inter-model variability), are designed to help to understand model uncertainty (rather than scenario or initial condition uncertainty) in predictions of the response to greenhouse-gas-induced warming. RT4 links strongly with RT5 on the evaluation of the ENSEMBLES prediction system and feeds back its results to RT1 to guide improvements in the Earth system models and, through its research on predictability, to steer the development of methods for initialising the ensembles.
Abstract:
Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic models used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of Delaunay input-space-partitioned optimal piecewise locally linear models, to overcome the COD as well as to generate locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is used to search for a globally optimal solution of the Delaunay input space partition. A benchmark non-linear time series is used to demonstrate the new approach.