990 results for Linear Series
Abstract:
Remote sensing instruments are key tools for mapping land surface temperature (LST) at large temporal and spatial scales. In this paper, we show how passive microwave and thermal infrared data can be combined to estimate LST during summer snow-free periods over northern high latitudes. The methodology is based on SSM/I-SSMIS 37 GHz measurements at both vertical and horizontal polarizations on a 25 km × 25 km grid. LST is retrieved from brightness temperatures by introducing an empirical linear relationship between the emissivities at the two polarizations, as described in Royer and Poirier (2010). This relationship is calibrated at the pixel scale using cloud-free independent LST data from the MODIS instruments. The SSM/I-SSMIS and MODIS data are synchronized by fitting a diurnal cycle model built on skin temperature reanalyses provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The resulting temperature dataset is provided at 25 km resolution and at an hourly time step over the analysis period (2000-2011). This new product was evaluated locally against air temperature measurements and meteorological model reanalyses at five experimental sites of the EU-PAGE21 project, and compared to the MODIS LST product at both local and circumpolar scales. With a mean RMSE on the order of 2.2 K, the results demonstrate the usefulness of the microwave product, which, unlike thermal infrared products, is unaffected by clouds, and which offers better resolution than model reanalyses.
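As a hedged, back-of-the-envelope illustration of this kind of retrieval (not the paper's calibrated product): if Tb_p ≈ ε_p · LST at 37 GHz and the emissivities obey a pixel-calibrated linear relation ε_H = a · ε_V + b, then LST follows in closed form. The coefficients below are hypothetical stand-ins for the MODIS-calibrated values.

```python
import numpy as np

# Hedged sketch of a two-polarization LST retrieval (not the paper's
# calibrated product). Assume Tb_p ~ eps_p * LST at 37 GHz and a
# pixel-calibrated linear emissivity relation eps_H = a * eps_V + b;
# substituting eps_p = Tb_p / LST gives LST = (Tb_H - a * Tb_V) / b.
def lst_from_37ghz(tb_v, tb_h, a, b):
    """LST (K) from vertically/horizontally polarized brightness temps."""
    return (tb_h - a * tb_v) / b

# Hypothetical pixel coefficients (stand-ins for the MODIS-calibrated ones)
a, b = 0.85, 0.08
print(lst_from_37ghz(np.array([255.0]), np.array([238.0]), a, b))  # ~[265.6]
```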
Abstract:
This paper presents a theoretical analysis of, and an optimization method for, envelope amplifiers. Highly efficient envelope amplifiers based on a switching converter in parallel or in series with a linear regulator are analyzed and optimized. The results of the optimization process are presented, and the two architectures are compared in terms of complexity and efficiency. The proposed optimization method relies on prior knowledge of the transmitted signal type (OFDM, WCDMA...) and can be applied to any signal type as long as the envelope probability distribution is known. Finally, it is shown that the analyzed architectures have an inherent efficiency limit.
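To make the role of the envelope probability distribution concrete, here is a minimal sketch of how an average efficiency can be computed by weighting instantaneous efficiency with the envelope PDF, for an idealized series linear regulator fed from a fixed rail. The rail voltage, load, and Rayleigh envelope model are illustrative assumptions, not the paper's architectures.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

# Hedged sketch: average efficiency of an idealized series linear regulator
# fed from a fixed rail V_S into a resistive load, weighted by the envelope
# probability distribution. Rail, load, and Rayleigh model are assumptions.
V_S, R = 12.0, 50.0                    # supply rail (V) and load (ohm)
env = stats.rayleigh(scale=3.0)        # OFDM-like envelope; P(v > V_S) ~ 3e-4

v = np.linspace(0.0, V_S, 2001)
pdf = env.pdf(v)
p_out = trapezoid(v**2 / R * pdf, v)   # E[v^2 / R]: power delivered to load
p_in = trapezoid(V_S * v / R * pdf, v) # E[V_S * v / R]: power drawn from rail
print(f"average efficiency ~ {p_out / p_in:.1%}")   # inherent limit < 100%
```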
Abstract:
A linear method is developed for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. This method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition in thermal modes, analogous to the decomposition of mechanical vibrations in normal modes. The stationary periodic solution of the linear equations can be expressed either as an explicit integral or as a Fourier series. The method is applied to a minimal thermal model of a satellite with ten isothermal parts (nodes) and compared with direct numerical integration of the nonlinear equations. Its computational complexity is briefly studied for general thermal models of orbiting spacecraft; it is concluded that the method is certainly useful for reduced models and conceptual design, and can even be more efficient than direct integration for large models. The Fourier series computations for the ten-node satellite model show that the periodic solution is sufficiently accurate at the second perturbative order.
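A minimal sketch of the thermal-mode idea, assuming a two-node model with illustrative coefficients rather than the paper's ten-node satellite: radiative couplings are linearized about a reference temperature, and the resulting constant-coefficient system is diagonalized, each eigenvector being a thermal mode with its own time constant.

```python
import numpy as np

# Hedged two-node sketch of the decomposition in thermal modes (illustrative
# coefficients, not the paper's ten-node satellite). Radiative couplings are
# linearized about a reference temperature Tref, and the resulting
# constant-coefficient system d(theta)/dt = C^{-1} K theta is diagonalized;
# each eigenvector is a thermal mode with its own decay time constant.
Tref = 290.0                           # linearization point (K)
C = np.diag([500.0, 800.0])            # nodal heat capacities (J/K)
k, r = 0.5, 2.0e-9                     # conductive (W/K), radiative (W/K^4)
h = 0.3                                # linearized loss to deep space (W/K)

g = k + 4.0 * r * Tref**3              # linearized node-to-node coupling
K = np.array([[-(g + h), g],
              [g, -(g + h)]])
evals, modes = np.linalg.eig(np.linalg.solve(C, K))
print("thermal-mode time constants (s):", -1.0 / evals)
```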
Abstract:
High-frequency dc-dc switching converters are used as envelope amplifiers in RF transmitters. The dc-dc converter must operate at very high frequency to track an envelope in the MHz range while supplying the power amplifier. One circuit suitable for this application is a hybrid topology composed of a switched converter and a linear regulator in series, which work together to adjust the output voltage and track the envelope accurately. This topology can take advantage of the reduced slew-rate technique, in which the switching dc-dc converter provides the RF envelope with a limited slew rate, avoiding high switching frequency and high power losses, while the linear regulator performs the fine adjustment needed to obtain an exact replica of the RF envelope. The combination of this control technique with this topology is proposed in this paper. Envelopes with different bandwidths are considered in order to optimize the efficiency of the dc-dc converter. Calculations and experiments have been carried out to track a 2 MHz envelope in the 0-12 V range for an EER RF transmitter.
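The following sketch illustrates the reduced slew-rate split under assumed numbers (sample rate, slew limit, and a sinusoidal 2 MHz test envelope): the switched stage follows a slew-limited reference while the series linear regulator absorbs the fast residual.

```python
import numpy as np

# Hedged sketch of the reduced slew-rate split (assumed sample rate, slew
# limit, and sinusoidal test envelope): the switched stage tracks a
# slew-limited reference; the series linear regulator absorbs the residual.
fs = 50e6                                   # sample rate (Hz)
t = np.arange(0, 5e-6, 1 / fs)
envelope = 6.0 + 6.0 * np.sin(2 * np.pi * 2e6 * t)   # 2 MHz, 0-12 V

max_step = 20e6 / fs                        # 20 V/us slew limit, per sample
coarse = np.empty_like(envelope)
coarse[0] = envelope[0]
for i in range(1, len(envelope)):
    step = np.clip(envelope[i] - coarse[i - 1], -max_step, max_step)
    coarse[i] = coarse[i - 1] + step        # switched-converter reference

fine = envelope - coarse                    # handled by the linear regulator
print(f"linear stage spans at most {np.abs(fine).max():.2f} V")
```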
Abstract:
Machine and statistical learning techniques are used in almost all online advertisement systems. The problem of discovering which content is most in demand (e.g., receives the most clicks) can be modeled as a multi-armed bandit problem. Contextual bandits (i.e., bandits with covariates, side information, or associative reinforcement learning) associate with each piece of content several features that define the "context" in which it appears (e.g., user, web page, time, region). This problem can be studied in the stochastic/statistical setting by means of the conditional probability paradigm using Bayes' theorem. However, for very large contextual information and/or under real-time constraints, the exact calculation of Bayes' rule is computationally infeasible. In this article, we present a method that can handle large contextual information for learning in contextual-bandit problems. This method was tested on the Yahoo! dataset challenge at ICML 2012's workshop "New Challenges for Exploration & Exploitation 3", obtaining second place. Its basic exploration policy is deterministic in the sense that the same input data (as a time series) always produce the same results. We address the deterministic exploration vs. exploitation issue, explaining how the proposed method deterministically finds an effective dynamic trade-off based solely on the input data, in contrast to other methods that use a random number generator.
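The abstract does not spell out the algorithm, so as a hedged illustration of a deterministic exploration policy for linear contextual bandits, here is a LinUCB-style sketch (not the authors' exact method): given the same input stream it always makes the same choices, since no random number generator is involved.

```python
import numpy as np

# Hedged sketch of a deterministic exploration policy for linear contextual
# bandits (LinUCB-style; not the authors' exact method). The same input
# stream always yields the same choices: no random number generator is used.
class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm X^T X + I
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm X^T y

    def select(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # ridge estimate
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))                    # deterministic ties

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

bandit = LinUCB(n_arms=3, dim=5)
x = np.array([1.0, 0.2, 0.0, 0.5, 0.3])   # context: user/page/time features
arm = bandit.select(x)
bandit.update(arm, x, reward=1.0)          # e.g. a click was observed
```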
Abstract:
A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, only a non-linear transformation, or both (first a linear and then a non-linear transformation). Methods that optimize a linear transformation start with an initial segmentation of the area of interest around the left myocardium by means of independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images or after linear registration. The non-linear motion compensation approaches applied include one method that registers only pairs of images in temporal succession (SERIAL), one that registers all images to one common reference (AllToOne), one that was designed to exploit quasi-periodicity in image data acquired during free breathing and was adapted to also handle data acquired with an initial breath-hold (QUASI-P), one that uses ICA to identify the motion and eliminate it (ICA-SP), and one that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
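As a hedged, translation-only illustration of the SERIAL scheme (the actual challenge methods optimize full linear and non-linear transformations), the sketch below registers each frame to its temporal predecessor with phase correlation and composes the pairwise shifts.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

# Hedged, translation-only illustration of the SERIAL scheme: register each
# frame to its temporal predecessor and compose the pairwise shifts so that
# every frame ends up aligned with frame 0.
def serial_motion_compensation(frames):
    aligned, total = [frames[0]], np.zeros(frames[0].ndim)
    for prev, cur in zip(frames[:-1], frames[1:]):
        drift, _, _ = phase_cross_correlation(prev, cur)
        total += drift                       # compose successive shifts
        aligned.append(nd_shift(cur, total))
    return np.stack(aligned)

rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(4)]   # stand-in image series
print(serial_motion_compensation(frames).shape)      # (4, 64, 64)
```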
Abstract:
The analysis of heart rate variability (HRV) uses time series containing the intervals between successive heartbeats in order to assess autonomic regulation of the cardiovascular system. These series are obtained from analysis of the electrocardiogram (ECG) signal, which can be affected by different types of artifacts that lead to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is to apply correction methods, some of them based on interpolation, substitution, or statistical techniques. However, few studies assess the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of several linear and non-linear correction methods on HRV signals with induced artifacts, by quantifying their linear and non-linear HRV parameters. As part of the methodology, ECG signals from rats, measured by telemetry, were used to generate real, error-free heart rate variability signals. Missing points (beats) were then simulated in these series in different quantities in order to emulate a real experimental situation as closely as possible. To compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW), and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed by measuring the mean of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE), and symbolic dynamics (SD) on each HRV signal with and without artifacts. The results show that at low levels of missing points the performance of all correction techniques is very similar, with very close values for each HRV parameter. At higher levels of loss, however, only the NPI method yields HRV parameters with low error values and few significant differences compared with the values calculated for the same signals without missing points.
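A small sketch of the evaluation protocol under assumed numbers (a synthetic RR series, 5% deleted beats, linear interpolation as the correction method): compute time-domain HRV parameters on the clean and the corrected series and compare.

```python
import numpy as np

# Hedged sketch of the comparison protocol (synthetic RR series, 5% deleted
# beats, linear interpolation (LI) as the correction method): compare
# time-domain HRV parameters between the clean and the corrected series.
rng = np.random.default_rng(1)
rr = 180.0 + 10.0 * rng.standard_normal(1000)    # rat RR intervals (ms)

def hrv_params(x):
    return {"AVNN": x.mean(), "SDNN": x.std(ddof=1),
            "RMSSD": np.sqrt(np.mean(np.diff(x) ** 2))}

lost = rng.choice(len(rr), size=50, replace=False)   # induced missing beats
corrupted = rr.copy()
corrupted[lost] = np.nan

idx = np.arange(len(rr))
good = ~np.isnan(corrupted)
corrected = corrupted.copy()
corrected[~good] = np.interp(idx[~good], idx[good], corrupted[good])  # LI

errors = {k: abs(v - hrv_params(corrected)[k]) for k, v in hrv_params(rr).items()}
print({k: round(v, 3) for k, v in errors.items()})
```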
Abstract:
This study considers the application of the SISAGUA mathematical simulation and optimization model to the operation of integrated reservoir systems within complex water supply systems. SISAGUA uses mixed-integer nonlinear programming (MINLP) with the objectives of avoiding or minimizing rationing, balancing the distribution of storage across systems with multiple reservoirs, and minimizing operating costs. The optimization methodology was applied to the water production system of the São Paulo Metropolitan Region (RMSP), which faced a water crisis during the 2013-2015 drought, the worst in the 85-year historical record. The region has 20.4 million inhabitants, and its system consists of eight partially integrated production systems operated by Sabesp (Companhia de Saneamento do Estado de São Paulo). The RMSP is a densely populated region located in the Alto Tietê river basin and characterized by low per capita water availability. The study addressed the possibility of accounting for evaporation in the simulations, as well as the application of a continuous rationing rule for the reservoirs, which turns the problem formulation into a nonlinear program (NLP). Evaporation proved to be of little significance relative to the flow delivered to meet demand, at about 1% of that flow. While a flow of this magnitude can help in a critical scenario, it is also of the same order as the uncertainties in measurements or inflow forecasts. Sensitivity testing of different rationing rates as a function of stored volume makes it possible to analyze the response time of each system. The variation in recovery time, however, did not prove very significant.
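A single-reservoir sketch of the continuous rationing rule idea (illustrative numbers, not SISAGUA's formulation): the delivered fraction of demand ramps down smoothly as storage falls, which keeps the problem a smooth NLP rather than a mixed-integer one.

```python
import numpy as np

# Hedged single-reservoir sketch of a continuous rationing rule (illustrative
# numbers, not SISAGUA's formulation): the delivered fraction of demand ramps
# down smoothly as storage falls, keeping the formulation a smooth NLP.
def ration_factor(s, s_low=0.2, s_high=0.5):
    """Fraction of demand served vs. storage fraction s (linear ramp)."""
    return float(np.clip(0.6 + 0.4 * (s - s_low) / (s_high - s_low), 0.6, 1.0))

capacity, storage = 1000.0, 450.0         # hm^3
demand, inflow, evap = 60.0, 40.0, 0.6    # hm^3/month; evap ~1% of demand
for month in range(6):
    release = demand * ration_factor(storage / capacity)
    storage = min(capacity, storage + inflow - release - evap)
    print(f"month {month}: release={release:.1f} storage={storage:.1f}")
```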
Abstract:
The Lomb periodogram has traditionally been a tool for elucidating whether a given frequency is important in explaining the behaviour of a time series. Many linear and nonlinear iterative harmonic processes used to study the spectral content of a time series rely on this periodogram in order to avoid including spurious frequencies in their models caused by the leakage of energy from one frequency to others. However, estimating the periodogram requires long computation times, which slows down the harmonic analysis for certain time series. Here we propose an algorithm that accelerates the extraction of the most remarkable frequencies from the periodogram, avoiding its complete re-estimation at each iteration of the harmonic process. The algorithm allows the user to perform a specific analysis of a given scalar time series. As a result, we obtain a functional model made of (1) a trend component, (2) a linear combination of Fourier terms, and (3) so-called mixed secular terms, while reducing the computation time spent estimating the periodogram.
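For contrast, a plain (non-accelerated) version of the extraction loop that such an algorithm speeds up might look like the sketch below: locate the dominant Lomb peak, fit and subtract that harmonic, and repeat; the proposed method avoids recomputing the full periodogram at each such iteration.

```python
import numpy as np
from scipy.signal import lombscargle

# Hedged sketch of the plain (non-accelerated) extraction loop the proposed
# algorithm speeds up: find the dominant Lomb peak, least-squares fit and
# subtract that harmonic, and repeat on the residual.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 500))               # uneven sampling times
y = (2.0 * np.sin(2 * np.pi * 0.10 * t)
     + np.sin(2 * np.pi * 0.27 * t)
     + 0.3 * rng.standard_normal(t.size))

omegas = 2 * np.pi * np.linspace(0.01, 0.5, 2000)   # angular frequencies
resid = y - y.mean()                                # trend term: just a mean here
for _ in range(2):
    power = lombscargle(t, resid, omegas)           # full periodogram each pass
    w = omegas[np.argmax(power)]
    X = np.column_stack([np.cos(w * t), np.sin(w * t)])
    coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
    resid -= X @ coef                               # remove fitted harmonic
    print(f"f = {w / (2 * np.pi):.3f}, amplitude = {np.hypot(*coef):.2f}")
```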
Abstract:
Most models in classical statistics rest on an assumption about the distribution of the data, or about a distribution underlying the data. The validity of this assumption is what allows one to perform inference, construct confidence intervals, or test the reliability of the model. Goodness-of-fit testing aims to verify that the assumption conforms to, and is consistent with, the available data. In this thesis, we propose goodness-of-fit tests for normality in the setting of univariate and vector time series. We restrict ourselves to a class of linear time series, namely autoregressive moving average models (ARMA, or VARMA in the vector case). First, in the univariate case, we propose a generalization of the work of Ducharme and Lafaye de Micheaux (2004) to the case where the mean is unknown and estimated. We estimate the parameters by a method rarely used in the literature and yet asymptotically efficient: we rigorously show that the estimator proposed by Brockwell and Davis (1991, Section 10.8) converges almost surely to the true unknown value of the parameter. In addition, we provide a rigorous proof of the invertibility of the variance-covariance matrix of the test statistic, based on certain properties of linear algebra. The result also applies to the case where the mean is assumed known and equal to zero. Finally, we propose an AIC-type method for selecting the dimension of the family of alternatives, and we study its asymptotic properties. The tool proposed here is based on a specific family of orthogonal polynomials, namely the Legendre polynomials. Second, in the vector case, we propose a goodness-of-fit test for autoregressive moving average models with a structured parameterization. Structured parameterization reduces the large number of parameters in these models and makes it possible to take particular constraints into account; this project includes the standard case without parameterization. The test we propose applies to an arbitrary family of orthogonal functions; we illustrate this in the particular cases of the Legendre and Hermite polynomials. In the particular case of the Hermite polynomials, we show that the resulting test is invariant under affine transformations and is in fact a generalization of many tests in the literature. This project can be seen as a generalization of the first in three directions: from the univariate to the multivariate setting; the choice of an arbitrary family of orthogonal functions; and the possibility of specifying relations or constraints in the VARMA formulation. In both projects we carried out simulation studies to assess the level and power of the proposed tests and to compare them with existing tests, and applications to real data are provided. We applied the tests to forecasting the mean annual global surface temperature (univariate) and to Canadian labour market data (bivariate). This work has been presented at several conferences (see, for example, Tagne, Duchesne and Lafaye de Micheaux (2013a, 2013b, 2014) for details).
An article based on the first project has also been submitted to a peer-reviewed journal (see Duchesne, Lafaye de Micheaux and Tagne (2016)).
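As a hedged sketch of the general construction (not the thesis's exact statistic, estimator, or AIC-type dimension-selection rule), a Neyman-type smooth test built on Legendre polynomials applied to probability-integral-transformed ARMA residuals can be set up as follows.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy import stats

# Hedged sketch of a Neyman-type smooth normality test on ARMA residuals
# using Legendre polynomials (not the thesis's exact statistic, estimator,
# or AIC-type dimension selection; parameter-estimation effects are ignored).
rng = np.random.default_rng(3)
x = np.zeros(600)
for i in range(1, x.size):                 # simulate a Gaussian AR(1) series
    x[i] = 0.6 * x[i - 1] + rng.standard_normal()

phi = np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2)  # AR(1) least squares
e = x[1:] - phi * x[:-1]                            # residuals
u = stats.norm.cdf((e - e.mean()) / e.std(ddof=1))  # PIT to (0, 1)

K, n = 4, u.size
comps = [np.sqrt(2 * k + 1) * Legendre.basis(k)(2 * u - 1).sum() / np.sqrt(n)
         for k in range(1, K + 1)]         # orthonormal Legendre components
stat = float(np.sum(np.square(comps)))     # approx. chi2(K) under normality
print(f"statistic = {stat:.2f}, chi2({K}) 95% point = {stats.chi2.ppf(0.95, K):.2f}")
```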
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
We demonstrate that the process of generating smooth transitions can be viewed as a natural result of the filtering operations implied in generating discrete-time series observations by sampling data from an underlying continuous-time process that has undergone structural change. To focus the discussion, we consider the problem of estimating the location of abrupt shifts in some simple time series models. This approach permits us to address salient issues relating to the distortions induced by the inherent aggregation associated with discrete-time sampling of continuous-time processes experiencing structural change. We also address the issue of how time-irreversible structures may be generated within smooth transition processes.
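A tiny simulation makes the aggregation mechanism concrete (illustrative parameters): a fine-grained process with an abrupt level shift, observed only as block averages over each sampling interval, exhibits a smooth transition in the observed series.

```python
import numpy as np

# Hedged simulation of the aggregation mechanism (illustrative parameters):
# a fine-grained process with an abrupt level shift, observed only as block
# averages, shows a smooth observed transition around the break.
fine_per_obs, n_obs = 50, 40
t_fine = np.arange(n_obs * fine_per_obs)
level = np.where(t_fine < 19.6 * fine_per_obs, 0.0, 1.0)  # break mid-interval
x_fine = level + 0.05 * np.random.default_rng(4).standard_normal(t_fine.size)

x_obs = x_fine.reshape(n_obs, fine_per_obs).mean(axis=1)  # discrete sampling
print(np.round(x_obs[17:23], 2))   # shift appears gradual in observed data
```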
Abstract:
Small-angle neutron scattering measurements on a series of monodisperse linear entangled polystyrene melts in nonlinear flow through an abrupt 4:1 contraction have been made. Clear signatures of melt deformation and subsequent relaxation can be observed in the scattering patterns, which were taken along the centerline. These data are compared with the predictions of a recently derived molecular theory. Two levels of molecular theory are used: a detailed equation describing the evolution of molecular structure over all length scales relevant to the scattering data and a simplified version of the model, which is suitable for finite element computations. The velocity field for the complex melt flow is computed using the simplified model and scattering predictions are made by feeding these flow histories into the detailed model. The modeling quantitatively captures the full scattering intensity patterns over a broad range of data with independent variation of position within the contraction geometry, bulk flow rate and melt molecular weight. The study provides a strong, quantitative validation of current theoretical ideas concerning the microscopic dynamics of entangled polymers which builds upon existing comparisons with nonlinear mechanical stress data. Furthermore, we are able to confirm the appreciable length scale dependence of relaxation in polymer melts and highlight some wider implications of this phenomenon.
Abstract:
In the Bayesian framework, predictions for a regression problem are expressed in terms of a distribution of output values. The mode of this distribution corresponds to the most probable output, while the uncertainty associated with the predictions can conveniently be expressed in terms of error bars. In this paper we consider the evaluation of error bars in the context of the class of generalized linear regression models. We provide insights into the dependence of the error bars on the location of the data points and we derive an upper bound on the true error bars in terms of the contributions from individual data points which are themselves easily evaluated.
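This setting admits a standard closed form, sketched below under assumed basis functions and hyperparameters: for a fixed-basis linear model with Gaussian prior precision alpha and noise precision beta, the predictive variance at x is 1/beta + phi(x)^T A^{-1} phi(x) with A = alpha*I + beta*Phi^T Phi, so the error bars depend on where x lies relative to the data points.

```python
import numpy as np

# Hedged sketch of the standard error bars for a fixed-basis linear model
# with Gaussian prior (precision alpha) and noise (precision beta):
#   var(x) = 1/beta + phi(x)^T A^{-1} phi(x),  A = alpha*I + beta*Phi^T Phi.
# Basis functions and hyperparameters below are illustrative assumptions.
rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, 30)
y = np.sin(3 * X) + 0.1 * rng.standard_normal(30)

centers = np.linspace(-1, 1, 9)
def phi(x):                                  # Gaussian basis functions
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centers) / 0.3) ** 2)

alpha, beta = 0.1, 100.0
Phi = phi(X)
A = alpha * np.eye(centers.size) + beta * Phi.T @ Phi
mean_w = beta * np.linalg.solve(A, Phi.T @ y)

P = phi(np.array([0.0, 0.95]))               # inside vs. edge of the data
var = 1 / beta + np.einsum("ij,jk,ik->i", P, np.linalg.inv(A), P)
print("means:", P @ mean_w, "error bars:", np.sqrt(var))  # wider at the edge
```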