996 results for LOCALLY STATIONARY WAVELET PROCESSES
Abstract:
Due to their aragonitic shell, thecosome pteropods may be particularly vulnerable to ocean acidification driven by anthropogenic CO2 emissions. This applies especially to species inhabiting Arctic surface waters, which are projected to become temporarily and locally undersaturated with respect to aragonite as early as 2016. This study investigated the effects of rising partial pressure of CO2 (pCO2) and elevated temperature on pre-winter juveniles of the polar pteropod Limacina helicina. After a 29-day experiment in September/October 2009 at three different temperatures and under pCO2 scenarios projected for this century, mortality, shell degradation, shell diameter and shell increment were investigated. Temperature and pCO2 both had a significant effect on mortality, but temperature was the overriding factor. Shell diameter, shell increment and shell degradation were significantly affected by pCO2 but not by temperature. Mortality was 46% higher at 8 °C than at the in situ temperature (3 °C), and 14% higher at 1100 µatm than at 230 µatm. Shell diameter and shell increment were reduced by 10% and 12%, respectively, at 1100 µatm compared with 230 µatm, and shell degradation was 41% higher at elevated than at ambient pCO2. We conclude that pre-winter juveniles will be negatively affected by both rising temperature and rising pCO2, which may lead to a decline in abundance of the overwintering population, the basis for next year's reproduction.
Abstract:
One of the key steps to achieving high efficiencies in amorphous/crystalline silicon photovoltaic structures is to design low-ohmic-resistance back contacts with good passivation at the rear of the cell. A well-known approach to this goal is the laser-fired contact (LFC) process, in which a metal layer is fired through the dielectric to form good contacts with the semiconductor. However, although this approach has proved extremely successful, there is still considerable room for improvement through appropriate process optimization. In this paper, we present a study focused on the optimal adjustment of the irradiation parameters used to produce laser-fired contacts in a-Si:H/c-Si heterojunction solar cells. We used samples consisting of crystalline silicon (c-Si) wafers with a passivation layer of intrinsic hydrogenated amorphous silicon (a-Si:H(i)) deposited by plasma-enhanced chemical vapor deposition (PECVD). An aluminum layer was then evaporated on both sides; its thickness was varied from 0.2 to 1 μm in order to identify the optimal amount of Al required to create an appropriate contact. A Q-switched Nd:YVO4 laser source (λ = 532 nm) was used to locally fire the aluminum through the thin a-Si:H(i) layers to form the LFCs. The effects of laser fluence were analyzed using a comprehensive morphological and electrical characterization.
Abstract:
We establish a refined version of the Second Law of Thermodynamics for Langevin stochastic processes describing mesoscopic systems driven by conservative or non-conservative forces and interacting with thermal noise. The refinement is based on Monge-Kantorovich optimal mass transport and becomes relevant for processes far from the quasi-stationary regime. The general discussion is illustrated by a numerical analysis of the optimal memory-erasure protocol for a model of a micron-sized particle manipulated by optical tweezers.
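For orientation, refinements of this type are usually stated as a lower bound on the mean dissipation over a finite-time protocol in terms of the optimal-transport distance between the end states. The following is a schematic form in the conventions of that literature (unit mobility, temperature absorbed into the units); it is offered as an illustration of the structure of such bounds, not as a formula quoted from the paper:

```latex
% Schematic optimal-transport bound: for a protocol of duration \tau
% driving the probability density from \rho_0 to \rho_1, the mean
% dissipated heat is bounded below by the squared Wasserstein-2
% (Monge-Kantorovich) distance between the end states.
\langle Q_{\mathrm{diss}} \rangle \;\ge\; \frac{W_2^2(\rho_0, \rho_1)}{\tau},
\qquad
W_2^2(\rho_0, \rho_1) \;=\; \inf_{\pi \in \Pi(\rho_0, \rho_1)}
\int \lVert x - y \rVert^2 \, \mathrm{d}\pi(x, y),
```

where Π(ρ0, ρ1) denotes the couplings with marginals ρ0 and ρ1. Saturation of such a bound singles out the optimal protocols, which is what makes it a refinement of the usual Second Law far from the quasi-stationary regime.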
Abstract:
Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually and requires adequate machine learning and data mining techniques. Classification is one of the most important of these techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data; second, classify new incoming unseen data instances using the learned classifier.
Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing from the training data (i.e., unlabeled data). Besides this taxonomy, the classification problem can be categorized as uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or as stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically used Bayesian network classifiers as models. The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data, proposed from two different points of view. The first method, named CB-MBC, is based on a greedy forward wrapper selection approach, while the second, named MB-MBC, is a filter constraint-based approach using Markov blankets. Both methods are applied to two important real-world problems, namely the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used for the Parkinson's disease prediction problem, namely multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables. The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect: the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and requires it to be updated. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any drift is detected, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general in that it can be applied to several classification models. Using two different models, namely the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware.
Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while also achieving good classification performance. Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
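To make the drift-monitoring step concrete, here is a minimal sketch of a Page-Hinkley detector watching a stream of per-batch average log-likelihood scores, as LA-MB-MBC/GA-MB-MBC do; the parameter names and values are illustrative assumptions, not taken from the thesis:

```python
class PageHinkley:
    """Page-Hinkley change detector flagging a sustained drop in a
    monitored score (here, a classifier's average log-likelihood)."""

    def __init__(self, delta=0.005, lam=5.0):
        self.delta = delta     # change magnitude tolerated as noise
        self.lam = lam         # alarm threshold (illustrative value)
        self.t = 0             # number of scores seen so far
        self.mean = 0.0        # running mean of the scores
        self.cum = 0.0         # cumulative deviation below the mean
        self.min_cum = 0.0     # historical minimum of `cum`

    def update(self, score):
        """Feed one per-batch score; return True when drift is signalled."""
        self.t += 1
        self.mean += (score - self.mean) / self.t
        self.cum += self.mean - score - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return (self.cum - self.min_cum) > self.lam


# Demo on synthetic scores: stable around -2.0, then an abrupt drop to -3.0.
import random
random.seed(0)
detector = PageHinkley()
scores = ([random.gauss(-2.0, 0.05) for _ in range(300)]
          + [random.gauss(-3.0, 0.05) for _ in range(300)])
alarms = [t for t, s in enumerate(scores) if detector.update(s)]
print(alarms[0] if alarms else "no drift")  # alarm expected shortly after t = 300
```

On an alarm, LA-MB-MBC would adapt the current classifier locally around the changed nodes, while GA-MB-MBC would relearn it from scratch; the detector is then reset.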
Abstract:
Remote sensing is the science concerned with gathering information (spectral, spatial, temporal) about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the item under study.
In general, the data obtained from remote sensing of the Earth's surface are images, which support a large and constantly evolving range of applications. To meet the requirements of these applications, new algorithms are regularly proposed to improve or facilitate particular processes. Developing such algorithms requires suitable mathematical methods, such as multiresolution analysis, which allows a signal to be analyzed at different scales. One option is the Dual-Tree Complex Wavelet Transform (DT-CWT), which is implemented using two real filters and is characterized by invariance to translations. Among the advantages of this transform is its successful application to image fusion and change detection. In this regard, this thesis presents three algorithms, applied to image fusion, assessment of image fusion, and change detection in multitemporal images. For image fusion, a general scheme is presented that can be used with any multiresolution analysis technique; the algorithm is first implemented with the DT-CWT and then extended to an alternative method, the bilateral filter. In either case, the methodology allows the injection of components to be performed in several alternative ways. For fusion assessment, the proposal centers on a scheme that uses classification processes, which makes it possible to evaluate fusion results individually for each type of land-use cover defined in the evaluation process. This methodology complements traditional assessment procedures and can facilitate analysis of the impact of fusion on particular land-cover classes. Finally, two change detection approaches are included. The first is aimed at obtaining drought maps from multitemporal data using spectral indices. The second uses a global index of spectral quality as a spatial filter. This filter facilitates a global spectral comparison between two images and, combined with thresholding, yields difference images containing the change information.
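To illustrate the general multiresolution fusion scheme described above, here is a minimal sketch using a plain discrete wavelet transform in place of the thesis's DT-CWT (PyWavelets supplies the transform; the substitution injection rule is just one of the alternatives the scheme allows, and the multispectral band is assumed to have been resampled to the panchromatic grid beforehand):

```python
import numpy as np
import pywt

def wavelet_fusion(ms_band, pan, wavelet="db4", level=2):
    """Fuse one multispectral band with a panchromatic image: keep the
    MS approximation subband, inject the PAN detail subbands."""
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=level)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=level)
    fused = [ms_coeffs[0]] + pan_coeffs[1:]   # [approximation] + [details]
    return pywt.waverec2(fused, wavelet)

# Usage: fused = wavelet_fusion(ms_resampled, pan)  # arrays of equal shape
```

Swapping the transform for the DT-CWT (or the bilateral filter) and the substitution rule for a weighted or model-based injection recovers the more general scheme the thesis describes.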
Abstract:
The trithorax gene family contains members implicated in the control of transcription, development, chromosome structure, and human leukemia. A feature shared by some family members, and by other proteins that function in chromatin-mediated transcriptional regulation, is the presence of a 130- to 140-amino acid motif dubbed the SET or Tromo domain. Here we present an analysis of SET1, a yeast member of the trithorax gene family, identified by sequence inspection as encoding a 1080-amino acid protein with a C-terminal SET domain. In addition to its SET domain, which is 40–50% identical to those previously characterized, SET1 also shares dispersed but significant similarity with Drosophila and human trithorax homologues. To understand SET1 function(s), we created a null mutant. Mutant strains, although viable, are defective in transcriptional silencing of the silent mating-type loci and telomeres. The telomeric silencing defect is rescued not only by full-length episomal SET1 but also by the conserved SET domain of SET1. set1 mutant strains display other phenotypes including morphological abnormalities, stationary-phase defects, and growth and sporulation defects. Candidate genes that may interact with SET1 include those with functions in transcription, growth, and cell cycle control. These data suggest that yeast SET1, like its SET domain counterparts in other organisms, functions in diverse biological processes including transcription and chromatin structure.
Abstract:
Humans affect biodiversity at the genetic, species, community, and ecosystem levels. This impact on genetic diversity is critical, because genetic diversity is the raw material of evolutionary change, including adaptation and speciation. Two forces affecting genetic variation are genetic drift (which decreases genetic variation within but increases genetic differentiation among local populations) and gene flow (which increases variation within but decreases differentiation among local populations). Human activities often augment drift and diminish gene flow for many species, which reduces genetic variation in local populations and prevents the spread of adaptive complexes outside their population of origin, thereby disrupting adaptive processes both locally and globally within a species. These impacts are illustrated with collared lizards (Crotaphytus collaris) in the Missouri Ozarks. Forest fire suppression has reduced habitat and disrupted gene flow in this lizard, thereby shifting the balance toward drift and away from gene flow. This balance can be restored by managed landscape burns. Some have argued that, although human-induced fragmentation disrupts adaptation, it will also ultimately produce new species through founder effects. However, population genetic theory and experiments predict that most fragmentation events caused by human activities will facilitate not speciation, but local extinction. Founder events have played an important role in the macroevolution of certain groups, but only when ecological opportunities were expanding rather than contracting. The general impact of human activities on genetic diversity disrupts or diminishes the capacity for adaptation, speciation, and macroevolutionary change. This impact will ultimately diminish biodiversity at all levels.
Abstract:
Early metazoan development is programmed by maternal mRNAs inherited by the egg at the time of fertilization. These mRNAs are not translated en masse at any one time or at any one place, but instead their expression is regulated both temporally and spatially. Recent evidence has shown that one maternal mRNA, cyclin B1, is concentrated on mitotic spindles in the early Xenopus embryo, where its translation is controlled by CPEB (cytoplasmic polyadenylation element binding protein), a sequence-specific RNA binding protein. Disruption of the spindle-associated translation of this mRNA results in a morphologically abnormal mitotic apparatus and inhibited cell division. Mammalian neurons, particularly in the synapto-dendritic compartment, also contain localized mRNAs such as that encoding α-CaMKII. Here, synaptic activation drives local translation, an event that is involved in synaptic plasticity and possibly long-term memory storage. Synaptic translation of α-CaMKII mRNA also appears to be controlled by CPEB, which is enriched in the postsynaptic density. Therefore, CPEB-controlled local translation may influence such seemingly disparate processes as the cell cycle and synaptic plasticity.
Abstract:
Several microbial systems have been shown to yield advantageous mutations in slowly growing or nongrowing cultures. In one assay system, the stationary-phase mutation mechanism differs from growth-dependent mutation, demonstrating that the two are different processes. This system assays reversion of a lac frameshift allele on an F′ plasmid in Escherichia coli. The stationary-phase mutation mechanism at lac requires recombination proteins of the RecBCD double-strand-break repair system and the inducible error-prone DNA polymerase IV, and the mutations are mostly −1 deletions in small mononucleotide repeats. This mutation mechanism is proposed to occur by DNA polymerase errors made during replication primed by recombinational double-strand-break repair. It has been suggested that this mechanism is confined to the F plasmid. However, the cells that acquire the adaptive mutations show hypermutation of unrelated chromosomal genes, suggesting that chromosomal sites also might experience recombination protein-dependent stationary-phase mutation. Here we test directly whether the stationary-phase mutations in the bacterial chromosome also occur via a recombination protein- and pol IV-dependent mechanism. We describe an assay for chromosomal mutation in cells carrying the F′ lac. We show that the chromosomal mutation is recombination protein- and pol IV-dependent and also is associated with general hypermutation. The data indicate that, at least in these male cells, recombination protein-dependent stationary-phase mutation is a mechanism of general inducible genetic change capable of affecting genes in the bacterial chromosome.
Abstract:
Let Q be a stable and conservative Q-matrix over a countable state space S consisting of an irreducible class C and a single absorbing state 0 that is accessible from C. Suppose that Q admits a finite μ-subinvariant measure m on C. We derive necessary and sufficient conditions for there to exist a Q-process for which m is μ-invariant on C, as well as a necessary condition for the uniqueness of such a process.
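For readers outside this literature, the (sub)invariance conditions in question take the following standard form in continuous-time Markov chain theory; this is the conventional definition, stated as background rather than quoted from the abstract:

```latex
% For \mu \ge 0, a measure m = (m_j : j \in C) is \mu-subinvariant
% for Q on C if
\sum_{i \in C} m_i \, q_{ij} \;\le\; -\mu \, m_j, \qquad j \in C,
% and \mu-invariant on C for a Q-process with transition function
% P(t) = (p_{ij}(t)) if
\sum_{i \in C} m_i \, p_{ij}(t) \;=\; e^{-\mu t} \, m_j, \qquad j \in C,\; t \ge 0.
```

The abstract's question is thus when a measure satisfying the first (inequality) condition for the generator Q is actually μ-invariant, in the second sense, for some process with that generator.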
Abstract:
To evaluate the extent of human impact on a pristine Antarctic environment, natural baseline levels of trace metals have been established in the basement rocks of the Larsemann Hills, East Antarctica. From a mineralogical and geochemical point of view the Larsemann Hills basement is relatively homogeneous, and contains high levels of Pb, Th and U. These may become soluble during the relatively mild Antarctic summer and be transported to lake waters by surface and subsurface melt water. Melt waters may also be locally enriched in V, Cr, Co, Ni, Zn and Sr derived from the weathering of metabasite pods. With a few notable exceptions, the trace metal concentrations measured in the Larsemann Hills lake waters can be entirely accounted for by natural processes such as sea spray and surface melt water input. Thus, the amount of trace metals released by weathering of basement lithologies and dispersed into the Larsemann Hills environment, and presumably into similar Antarctic environments, is in general not negligible, and may locally be substantial. The Larsemann Hills sediments are coarse-grained and contain only minute amounts of clay-sized particles, although human activities have contributed to the generation of fine-grained material at the most impacted sites. Irrespective of their origin, these small amounts of fine-grained clastic sediments have a relatively small surface area and charge, and are not as effective metal sinks as the abundant, thick cyanobacterial algal mats that cover the lake floors. Thus, the concentration of trace metals in the Larsemann Hills lake waters is regulated by biological activity and thawing-freezing cycles, rather than by the type and amount of clastic sediment supply.
Abstract:
This Letter addresses image segmentation via a generative model approach. A Bayesian network (BN) in the space of dyadic wavelet transform coefficients is introduced to model texture images. The model is similar to a hidden Markov model (HMM), but with non-stationary transition conditional probability distributions. It is composed of discrete hidden variables and observable Gaussian outputs for the wavelet coefficients. In particular, the Gabor wavelet transform is considered. The introduced model is compared with the simplest joint Gaussian probabilistic model for Gabor wavelet coefficients on several textures from the Brodatz album [1]. The comparison is based on cross-validation and uses ensembles of probabilistic models rather than single models. In addition, the robustness of the models to additive Gaussian noise is investigated. We further study the feasibility of the introduced generative model for image segmentation in the novelty detection framework [2]. Two examples are considered: (i) sea-surface pollution detection from intensity images and (ii) segmentation of still images with varying illumination across the scene.
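As a concrete reference point, here is a minimal sketch of the baseline the Letter compares against: a single joint Gaussian model over Gabor wavelet coefficient vectors, used for novelty-style segmentation by thresholding the per-pixel log-density. The `gabor` filter from scikit-image is assumed for the transform, and the filter-bank parameters are illustrative:

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(img, freqs=(0.1, 0.2), thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Per-pixel magnitude responses of a small Gabor filter bank."""
    channels = [np.hypot(*gabor(img, frequency=f, theta=t))
                for f in freqs for t in thetas]
    return np.stack(channels, axis=-1).reshape(-1, len(channels))

def fit_joint_gaussian(X):
    """Fit the baseline joint Gaussian; return a per-row log-density function."""
    mu = X.mean(axis=0)
    cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
    inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
    def log_density(Y):
        d = Y - mu
        quad = np.einsum("ij,jk,ik->i", d, inv, d)  # per-row Mahalanobis term
        return -0.5 * (quad + logdet + Y.shape[1] * np.log(2 * np.pi))
    return log_density

# Novelty-detection segmentation: train on a clean texture patch, then flag
# pixels of a test image whose coefficients are unlikely under the model.
# log_density = fit_joint_gaussian(gabor_features(train_patch))
# mask = (log_density(gabor_features(test_img)) < threshold).reshape(test_img.shape)
```

The Letter's contribution is to replace this single Gaussian with a BN over hidden states and Gaussian outputs; the novelty-detection thresholding step is the same in both cases.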
Abstract:
Companies must not see e-Business as a panacea but should instead assess the specific impact of implementing e-Business on their business from both an internal and an external perspective. E-Business is promoted as increasing the speed of response and reducing costs locally, but these benefits must be assessed for the wider business rather than as local improvements. This paper argues that any such assessment must include quantitative analysis covering the physical as well as the information flows within a business. It is noted that as business processes are e-enabled, their structure does not significantly change, and it is only through the use of modelling techniques that the operational impact can be ascertained. The paper reviews techniques appropriate for this type of analysis, as well as specific modelling tools and applications. From this review, a set of requirements for e-Business process modelling is derived.
Abstract:
This paper presents forecasting techniques for one-day-ahead energy demand and price prediction. These techniques combine the wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches to combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results show that forecasting accuracy is significantly improved by the WT and by adaptive models. The best models for the electricity demand and gas price forecasts are the adaptive MLP and adaptive GARCH, respectively, both with the multicomponent forecast; their MSEs are 0.02314 and 0.15384, respectively.
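To make the multicomponent strategy concrete, here is a minimal sketch: decompose the series into additive wavelet components, forecast each component one step ahead, and sum the component forecasts. A least-squares AR model stands in for the paper's MLP/GARCH component models, PyWavelets supplies the transform, and the parameters are illustrative; note that this naive version ignores the wavelet boundary effects a careful implementation must handle:

```python
import numpy as np
import pywt

def wavelet_components(series, wavelet="db4", level=3):
    """Split a series into level+1 additive components (approximation
    plus details) by reconstructing each subband separately."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    comps = []
    for k in range(len(coeffs)):
        masked = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        comps.append(pywt.waverec(masked, wavelet)[: len(series)])
    return comps  # the components sum back to the original series

def ar_forecast(x, lags=8):
    """One-step-ahead least-squares AR(lags) forecast (stand-in for MLP/GARCH)."""
    X = np.column_stack([x[i : len(x) - lags + i] for i in range(lags)])
    A = np.column_stack([X, np.ones(len(X))])      # add an intercept column
    w, *_ = np.linalg.lstsq(A, x[lags:], rcond=None)
    return np.r_[x[-lags:], 1.0] @ w

def multicomponent_forecast(series, wavelet="db4", level=3, lags=8):
    """Forecast each wavelet component separately and sum the forecasts."""
    return sum(ar_forecast(c, lags) for c in wavelet_components(series, wavelet, level))
```

The direct-forecast alternative the paper compares against would instead feed the wavelet coefficients jointly into a single prediction model for the original series.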
Abstract:
This thesis focused on applying theoretical models of synchronization to cortical dynamics as measured by magnetoencephalography (MEG). Dynamical systems theory was used both to identify relevant variables for brain coordination and to devise methods for their quantification. We presented a method for studying interactions of linear and chaotic neuronal sources using MEG beamforming techniques, and showed that such sources can be accurately reconstructed in terms of their location, temporal dynamics, and possible interactions. Synchronization in low-dimensional nonlinear systems was studied to explore specific correlates of functional integration and segregation. In the case of interacting dissimilar systems, the relevant coordination phenomena involved generalized and phase synchronization, which were often intermittent. Spatially extended systems were then studied. For locally coupled dissimilar systems, as in the case of cortical columns, clustering behaviour occurred. Synchronized clusters emerged at different frequencies, and their boundaries were marked by oscillation death. The macroscopic mean field revealed sharp spectral peaks at the frequencies of the clusters and broader spectral drops at their boundaries. These results call into question existing models of event-related synchronization and desynchronization. We re-examined the concept of the steady-state evoked response following an AM stimulus and showed that very little of the variability in the AM following response could be accounted for by system noise. We presented a methodology for detecting local and global nonlinear interactions from MEG data in order to account for the residual variability, and found cross-hemispheric nonlinear interactions of ongoing cortical rhythms concurrent with the stimulus, as well as interactions of these rhythms with the following AM responses. Finally, we hypothesized that holistic spatial stimuli would be accompanied by the emergence of clusters in primary visual cortex, resulting in frequency-specific MEG oscillations. Indeed, we found different frequency distributions in induced gamma oscillations for different spatial stimuli, suggestive of temporal coding of these spatial stimuli. We further addressed the bursting character of these oscillations, which was suggestive of intermittent nonlinear dynamics; however, we did not observe the characteristic -3/2 power-law scaling in the distribution of interburst intervals, and this distribution was only seldom significantly different from the one obtained in surrogate data in which the nonlinear structure was destroyed. In conclusion, the work presented in this thesis suggests that advances in dynamical systems theory, in conjunction with developments in magnetoencephalography, may facilitate a mapping between levels of description in the brain. This may potentially represent a major advancement in neuroscience.
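As one example of the kind of quantifier used for phase synchronization in such MEG analyses, here is a minimal sketch of the phase-locking value computed via the Hilbert transform; this is a standard measure offered as an illustration, not necessarily the exact statistic used in the thesis:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value of two narrow-band signals:
    |<exp(i*(phi_x - phi_y))>|, from 0 (no locking) to 1 (perfect locking)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Example: two noisy signals sharing a 10 Hz component show a high PLV
# despite a constant phase lag; independent noise alone would score near 0.
fs = 250
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
y = np.sin(2 * np.pi * 10 * t + 1.0) + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y))  # well above the chance level
```

Because a constant phase lag still yields a high PLV, the measure captures phase locking rather than zero-lag correlation, which is what makes it useful for the intermittent phase-synchronization phenomena described above.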