924 results for non-stationary loads
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both provide better descriptions of the data than models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
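To make the regime-switching idea concrete, here is a minimal sketch of a two-state Markov switching process with state-dependent mean and variance, the kind of structure both models above allow for; all parameter values are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-state Markov switching process (assumed parameters).
P = np.array([[0.95, 0.05],      # transition matrix: row i -> P(next state | state i)
              [0.10, 0.90]])
mu = np.array([0.010, -0.015])   # state-dependent means
sigma = np.array([0.02, 0.06])   # state-dependent volatilities

n = 1000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])

returns = rng.normal(mu[states], sigma[states])

# Mixing two normals with different variances produces the fat tails
# (excess kurtosis) that single-state models miss.
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
print(f"sample kurtosis: {kurt:.2f} (3.0 for a single normal)")
```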
Abstract:
Identifying predictability and its sources for the western North Pacific (WNP) summer climate under the non-stationary teleconnections of recent decades benefits further improvements of long-range prediction of the WNP and East Asian summers. In the past few decades, pronounced increases in summer sea surface temperature (SST) and the associated interannual variability have been observed over the tropical Indian Ocean and eastern Pacific around the late 1970s, and over the Maritime Continent and western–central Pacific around the early 1990s. These increases are associated with significant enhancements of the interannual variability of the lower-tropospheric wind over the WNP. In this study, we further assess interdecadal changes in the seasonal prediction of WNP summer anomalies, using May-start retrospective forecasts from the ENSEMBLES multi-model project for the period 1960–2005. Prediction of the WNP summer anomalies exhibits an interdecadal shift, with higher prediction skill since the late 1970s and particularly after the early 1990s. Improvements in prediction skill for SST after the late 1970s are found mainly around the tropical Indian Ocean and the WNP. The better prediction of the WNP after the late 1970s appears to arise mainly from improved SST prediction around the tropical eastern Indian Ocean; the close teleconnection between the tropical eastern Indian Ocean and WNP summer variability operates in both the model predictions and the observations. After the early 1990s, on the other hand, the improvements are detected mainly around the South China Sea and the Philippines for the lower-tropospheric zonal wind and precipitation anomalies, in association with a better description of the SST anomalies around the Maritime Continent. A dipole SST pattern over the Maritime Continent and the central equatorial Pacific is closely related to the WNP summer anomalies after the early 1990s. This teleconnection mode is realistically reproduced by the models and is highly predictable, providing more predictable signals for the WNP summer climate after the early 1990s.
Abstract:
Lagged correlation analysis is often used to infer intraseasonal dynamical effects but is known to be affected by non-stationarity. We highlight a pronounced quasi-two-year peak in the anomalous zonal wind and eddy momentum flux convergence power spectra in the Southern Hemisphere, which is prima facie evidence for non-stationarity. We then investigate the consequences of this non-stationarity for the Southern Annular Mode and for eddy momentum flux convergence. We argue that positive lagged correlations previously attributed to the existence of an eddy feedback are more plausibly attributed to non-stationary interannual variability external to any potential feedback process in the mid-latitude troposphere. The findings have implications for the diagnosis of feedbacks in both models and re-analysis data as well as for understanding the mechanisms underlying variations in the zonal wind.
Abstract:
When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or from non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches, and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated.
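As background for the robust backup the abstract refers to, the sketch below runs value iteration on a tiny MDP whose transition probabilities are only known to lie in intervals, with the inner "nature" step solved greedily. This greedy step works only for plain interval credal sets (the paper's factored case is what requires the nonlinear solver discussed above), and every number is an illustrative assumption.

```python
import numpy as np

def worst_case_dist(lo, hi, values):
    """Greedy solution of min_p p.values s.t. lo <= p <= hi, sum(p) = 1:
    the inner 'nature' step of a robust (interval) Bellman backup."""
    p = lo.copy()
    slack = 1.0 - p.sum()
    for i in np.argsort(values):          # push spare mass toward low-value successors
        add = min(hi[i] - lo[i], slack)
        p[i] += add
        slack -= add
        if slack <= 0:
            break
    return p

# Tiny 2-state, 2-action MDP with interval transition probabilities
# (assumed numbers, not from the paper).
lo = np.array([[[0.6, 0.2], [0.1, 0.7]],   # lo[s, a] over successor states
               [[0.3, 0.5], [0.4, 0.4]]])
hi = np.array([[[0.8, 0.4], [0.3, 0.9]],
               [[0.5, 0.7], [0.6, 0.6]]])
R = np.array([[1.0, 0.0], [0.5, 2.0]])     # R[s, a]
gamma, V = 0.9, np.zeros(2)

for _ in range(200):                        # robust value iteration
    Q = np.array([[R[s, a] + gamma * worst_case_dist(lo[s, a], hi[s, a], V) @ V
                   for a in range(2)] for s in range(2)])
    V = Q.max(axis=1)
print("robust values:", V)
```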
Abstract:
Background: Voice processing in real time is challenging. A drawback of previous work on Hypokinetic Dysarthria (HKD) recognition is the requirement of controlled settings in a laboratory environment. A personal digital assistant (PDA) has been developed for home assessment of PD patients. The PDA offers sound processing capabilities, which allow for developing a module for recognition and quantification of HKD. Objective: To compose an algorithm for assessment of PD speech severity in the home environment based on a review synthesis. Methods: A two-tier review methodology is utilized. The first tier focuses on real-time problems in speech detection. In the second tier, acoustic features that are robust to medication changes in Levodopa-responsive patients are investigated for HKD recognition. Keywords such as "Hypokinetic Dysarthria" and "Speech recognition in real time" were used in the search engines. IEEE Xplore produced the most useful search hits compared with Google Scholar, ELIN, EBRARY, PubMed and LIBRIS. Results: Vowel and consonant formants are the most relevant acoustic parameters for reflecting PD medication changes. Since the relevant speech segments (consonants and vowels) contain a minority of the speech energy, intelligibility can be improved by amplifying the voice signal using amplitude compression. Pause detection and peak-to-average power ratio calculations for voice segmentation produce rich voice features in real time. Voice segmentation can be enhanced by including the zero-crossing rate (ZCR): consonants have a high ZCR, whereas vowels have a low ZCR. The wavelet transform is found promising for voice analysis, since it decomposes non-stationary voice signals over a time series using scale and translation parameters; in this way, voice intelligibility in the waveform can be analyzed in each time frame. Conclusions: This review evaluated HKD recognition algorithms in order to develop a tool for PD speech home assessment using modern mobile technology. An algorithm that tackles real-time constraints in HKD recognition based on the review synthesis is proposed. We suggest that speech features may be further processed using wavelet transforms and used with a neural network for detection and quantification of speech anomalies related to PD. Based on this model, patients' speech can be automatically categorized according to UPDRS speech ratings.
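The segmentation cues named above (ZCR and peak-to-average power) are cheap to compute per frame. The sketch below shows one hedged way to do so on a synthetic vowel/consonant signal; the frame length, hop size and test signal are assumptions for illustration, not values taken from the reviewed studies.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=200):
    """Per-frame zero-crossing rate, mean power and peak-to-average
    power ratio -- the segmentation cues discussed in the review."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        power = np.mean(frame ** 2)
        papr = np.abs(frame).max() ** 2 / power if power > 0 else 0.0
        feats.append((zcr, power, papr))
    return np.array(feats)

# Vowel-like frames: tonal, low ZCR; consonant-like frames: noisy, high ZCR.
fs = 16000
t = np.arange(fs // 2) / fs
vowel = 0.8 * np.sin(2 * np.pi * 220 * t)
consonant = 0.1 * np.random.default_rng(0).standard_normal(fs // 2)
feats = frame_features(np.concatenate([vowel, consonant]))
half = len(feats) // 2
print("mean ZCR, vowel half:    ", feats[:half, 0].mean())
print("mean ZCR, consonant half:", feats[half:, 0].mean())
```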
Abstract:
The objective of this work is the analysis of concrete gravity dams from the construction phase until their complete entry into service. First, the construction phase is analyzed, where the fundamental problem is the thermal stresses caused by the heat of hydration. The finite element method is employed to solve the heat transfer and stress problems. The influence of layered construction is introduced by redefining the finite element mesh immediately after the placement of each concrete layer. Special attention is given to the problem of cracking in plain concrete structures. Some usual models are presented and their efficiency is discussed. Smeared crack models have been preferred, owing to the several drawbacks of discrete formulations. These models, however, yield results that depend on the finite element mesh, and some additional consideration must be made to correct these distortions. Usually, this problem is addressed by adopting a reduced tensile strength defined as a function of the fracture energy of the material. In this work, it is shown that this procedure is not satisfactory, and a new formulation for the analysis of large concrete structures is proposed. The stress analysis during the construction stage of the dam is carried out with an aging viscoelastic constitutive model for the concrete. Because of aging, the stiffness matrix of the structure varies in time and must be redefined and factorized at every instant. This entails a large computational effort, especially when the dam is built in many layers. To avoid this drawback, an iterative procedure is adopted that allows the stiffness matrix to be redefined at only a few reference ages. In a second stage of the analysis, the dam is subjected to hydrostatic pressure and to a seismic excitation. The dynamic analysis is performed considering the motion of the coupled dam-reservoir-foundation system. The earthquake is treated as a non-stationary stochastic process, and the safety of the structure is assessed with respect to the main failure modes.
Abstract:
In this work we analyze stochastic processes with polynomial (also called hyperbolic) decay of the autocorrelation function. Our study focuses on the class of ARFIMA processes and on processes obtained from iterations of the Manneville-Pomeau map. The main objectives are to compare several estimation methods for the fractional parameter of the ARFIMA process, in both the stationary and the non-stationary settings, and to obtain similar results for the parameter of the Manneville-Pomeau process. Among the various estimation methods for the parameters of these two processes, we highlight the one based on wavelet theory, since it showed the best performance.
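As an illustration of one of the estimators compared, the sketch below simulates an ARFIMA(0, d, 0) series and recovers d with the log-periodogram (GPH) regression. GPH is shown because it is short, not because it is the dissertation's preferred wavelet estimator, and the bandwidth choice is a common assumption rather than a value from the text.

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via the truncated MA(inf) expansion:
    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k."""
    psi = np.ones(n)
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(2 * n)
    return np.convolve(eps, psi, mode="valid")[:n]

def gph_estimate(x, m=None):
    """Log-periodogram (GPH) estimate of d: regress log I(lambda_j)
    on -2*log(2*sin(lambda_j/2)); the slope estimates d."""
    n = len(x)
    m = m or int(n ** 0.5)                 # usual bandwidth choice (assumed)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    return np.polyfit(regressor, np.log(periodogram), 1)[0]

rng = np.random.default_rng(1)
x = arfima_0d0(4000, d=0.3, rng=rng)
print("estimated d:", round(gph_estimate(x), 3))   # roughly 0.3, up to sampling noise
```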
Abstract:
This paper examines the statistical properties of the real exchange rates of the G-5 countries for the Bretton-Woods period, and draws implications for the purchasing power parity (PPP) hypothesis. In contrast to most previous studies, which consider only unit root and stationary processes to describe the real exchange rate, this paper also considers two in-between processes, the locally persistent process and the fractionally integrated process, to complement past studies. Consistent with the ample evidence of a near unit root, we find that the locally persistent process describes the real exchange rate movements very well. This finding implies that: 1) the real exchange rate movement is more persistent than the stationary case but less persistent than the unit root case; 2) the real exchange rate is non-stationary, but PPP reversion occurs and PPP holds in the long run; 3) the real exchange rate does not exhibit the secular dependence of fractional integration; 4) the real exchange rate evolves over time in such a way that there is persistence over a range of time, but the effect of shocks eventually disappears over time horizons longer than order O(n^d), that is, at finite time horizons; 5) the dissipation of shocks is faster than predicted by fractional integration, and the total sum of the effects of a unit innovation is finite, implying that full PPP reversion occurs at finite horizons. These results may explain why past empirical studies could not provide a clear conclusion on the real exchange rate processes and the PPP hypothesis.
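The near-unit-root ambiguity the abstract highlights is easy to reproduce. The hedged sketch below simulates a highly persistent but stationary AR(1) and applies the standard ADF unit-root test from statsmodels, which frequently fails to reject a unit root at this sample size; the AR coefficient and sample size are illustrative assumptions, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

# A stationary AR(1) with phi = 0.98 is "locally persistent"-looking:
# in samples of a few hundred observations the ADF test often cannot
# distinguish it from a true unit root.
rng = np.random.default_rng(0)
n, phi = 300, 0.98
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

stat, pvalue, *_ = adfuller(x)
print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```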
Abstract:
The difficulty of characterizing non-stationary allocations or equilibria is one of the main explanations for the use of concepts and assumptions that trivialize the dynamics of the economy. This difficulty is especially critical in Monetary Theory, in which the dimensionality of the problem is high even for very simple models. In this context, the present work reports the computational strategy used to implement the recursive method proposed by Monteiro and Cavalcanti (2006), which allows the computation of the optimal (possibly non-stationary) sequence of money distributions in an extension of the model proposed by Kiyotaki and Wright (1989). Three aspects of this computation are emphasized: (i) the computational implementation of the planner's problem involves choosing continuous and discrete variables that maximize a nonlinear function and satisfy nonlinear constraints; (ii) the objective function of this problem is not concave and the constraints are not convex; and (iii) the set of admissible choices is not known a priori. The goal is to document the difficulties involved, the solutions proposed, and the methods and resources available for the numerical implementation of the characterization of efficient monetary dynamics under the random-matching assumption.
Abstract:
The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
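The power argument behind the strategy can be made explicit with a standard back-of-the-envelope model (our assumption for illustration, not the paper's exact derivation): dynamic CMOS power scales as C·V²·f, and supply voltage tracks frequency roughly linearly, so P ~ f³; splitting the sensing workload across p cores running at f/p keeps the execution time fixed while using about 1/p² of the energy.

```python
# Ideal energy model (assumed): P(f) ~ f**3, p cores at f/p, same runtime T.
#   E(p) / E(1) = p * (f/p)**3 / f**3 = 1 / p**2
def energy_ratio(p: int) -> float:
    """Energy of p cores at frequency f/p relative to one core at f."""
    return p * (1.0 / p) ** 3

for p in (1, 2, 4, 8):
    print(f"p = {p}: energy ratio = {energy_ratio(p):.4f}")
```

Real devices fall short of the ideal 1/p² because voltage cannot scale all the way down with frequency and static power does not shrink, which is why the paper relates the savings to measured parallel-system metrics.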
Abstract:
In recent years, detrended fluctuation analysis (DFA), introduced by Peng, has been established as an important tool capable of detecting long-range autocorrelation in non-stationary time series. This technique has been successfully applied in various areas, such as econophysics, biophysics, medicine, physics and climatology. In this study, we used the DFA technique to obtain the Hurst exponent (H) of the bulk density log (RHOB) of 53 wells from the Namorado Field School. We want to know whether or not H can be used to characterize the field spatially. Two cases arise: in the first, the set of H values reflects the local geology, with wells that are geographically closer showing similar H, so H can be used in geostatistical procedures; in the second, each well has its own H, the information from different wells is uncorrelated, and the logs show only random fluctuations in H with no spatial structure. Cluster analysis is a widely used statistical method; in this work we use the non-hierarchical k-means method. In order to verify whether a set of data generated by the k-means method shows spatial patterns, we create the parameter Ω (neighborhood index): high Ω indicates more aggregated data, while low Ω indicates dispersed data or data without spatial correlation. With the help of this index and the Monte Carlo method, we verify that randomized cluster data show a distribution of Ω lower than the Ω of the actual clusters. We thus conclude that the H values obtained in the 53 wells are spatially grouped and can be used to characterize spatial patterns. The analysis of the level curves confirmed the k-means results.
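For readers unfamiliar with the technique, here is a minimal DFA-1 sketch: integrate the mean-removed series, detrend it linearly in windows of size s, and read the exponent off the slope of log F(s) versus log s. The scale grid is an assumption for illustration; applied to a RHOB log it would return the H used above.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Plain DFA-1: slope of log F(s) vs log s (Hurst-like exponent)."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated (profile) series
    n = len(profile)
    scales = scales or [int(s) for s in np.geomspace(8, n // 4, 12)]
    flucts = []
    for s in scales:
        n_seg = n // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        ms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segs]                  # detrend each window, keep MSE
        flucts.append(np.sqrt(np.mean(ms)))     # F(s)
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

# Sanity check: white noise should give an exponent near 0.5.
print(round(dfa_exponent(np.random.default_rng(0).standard_normal(4096)), 2))
```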
Abstract:
The objective of this study was to produce biofuels (bio-oil and gas) from the thermal treatment of sewage sludge in a rotating cylinder, aiming at industrial applications. The biomass was characterized by proximate and instrumental analyses (elemental analysis, scanning electron microscopy - SEM, X-ray diffraction, infrared spectroscopy and ICP-OES). A kinetic study under non-stationary regime was carried out to calculate the activation energy by thermogravimetric analysis, evaluating the thermochemical and thermocatalytic processing of the sludge, the latter in the presence of USY zeolite. As expected, the activation energy evaluated by the "model-free kinetics" approach, applying isoconversional techniques, was lowest for the catalytic tests (57.9 to 108.9 kJ/mol in the biomass conversion range of 40 to 80%). The laboratory-scale pyrolysis plant consists of a rotating cylinder, 100 cm long, capable of processing up to 1 kg of biomass per hour. In the thermochemical pyrolysis process, the following parameters were studied: reaction temperature (500 to 600 °C), carrier gas flow rate (50 to 200 mL/min), rotation frequency of the centrifuge for bio-oil condensation (20 to 30 Hz) and biomass feed rate (4 and 22 g/min). The products obtained during the process (pyrolytic liquid, char and gas) were characterized by classical and instrumental analytical techniques. The maximum yield of pyrolytic liquid was approximately 10.5%, obtained at a temperature of 500 °C, a centrifugation speed of 20 Hz, an inert gas flow of 200 mL/min and a biomass feed rate of 22 g/min. The highest yield obtained for the gas phase was 23.3%, at a temperature of 600 °C, an inert gas flow rate of 200 mL/min, a vapor-condensation column rotation frequency of 30 Hz and a biomass feed rate of 22 g/min. Non-oxygenated aliphatic hydrocarbons were found in the greatest proportion in the bio-oil (55%), followed by oxygenated aliphatics (27%). The bio-oil had the following characteristics: pH 6.81, density between 1.05 and 1.09 g/mL, viscosity between 2.5 and 3.1 cSt, and higher heating value between 16.91 and 17.85 MJ/kg. The main components of the gas phase were H2, CO, CO2 and CH4. Hydrogen was the main constituent of the gas mixture, with a yield of about 46.2% at a temperature of 600 °C. Among the hydrocarbons formed, methane was found in the highest yield (16.6%) at a temperature of 520 °C. The solid phase obtained showed a high ash content (70%) due to the abundant presence of metals in the char, in particular iron, which was also present in the bio-oil at a level of 0.068% in the test performed at a temperature of 500 °C.
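Isoconversional ("model-free") kinetics extracts an apparent activation energy at each conversion level from TGA runs at several heating rates, without assuming a reaction model. The sketch below uses the Kissinger-Akahira-Sunose form as one common instance of that family (the study's exact variant is not specified here), and the heating rates and temperatures are hypothetical numbers for illustration only.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Hypothetical temperatures (K) at which a fixed conversion alpha is reached
# for three heating rates beta (K/min) -- illustrative, not measured data.
beta = np.array([5.0, 10.0, 20.0])
T_alpha = np.array([540.0, 556.0, 573.0])

# Kissinger-Akahira-Sunose: ln(beta / T^2) = const - E / (R * T),
# so regressing ln(beta/T^2) on 1/T gives slope -E/R at this conversion.
slope = np.polyfit(1.0 / T_alpha, np.log(beta / T_alpha**2), 1)[0]
E = -slope * R
print(f"apparent activation energy at this conversion: {E / 1000:.1f} kJ/mol")
```

Repeating the regression over a grid of conversion levels (e.g. 40 to 80%) yields the E-versus-conversion curve from which ranges like the 57.9 to 108.9 kJ/mol quoted above are reported.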
Abstract:
On-line learning methods have been applied successfully in multi-agent systems to achieve coordination among agents. Learning in multi-agent systems implies a non-stationary scenario perceived by the agents, since the behavior of other agents may change as they simultaneously learn how to improve their actions. Non-stationary scenarios can be modeled as Markov Games, which can be solved using the Minimax-Q algorithm, a combination of Q-learning (a Reinforcement Learning (RL) algorithm which directly learns an optimal control policy) and the Minimax algorithm. However, finding optimal control policies using any RL algorithm (Q-learning and Minimax-Q included) can be very time-consuming. Seeking to improve the learning time of Q-learning, we considered the QS-algorithm, in which a single experience can update more than a single action value by using a spreading function. In this paper, we contribute the Minimax-QS algorithm, which combines the Minimax-Q algorithm and the QS-algorithm. We conduct a series of empirical evaluations of the algorithm in a simplified simulator of the soccer domain. We show that, even using a very simple domain-dependent spreading function, the performance of the learning algorithm can be improved.
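To show the spreading idea in isolation, the sketch below applies a QS-style update to a plain Q-learning backup; Minimax-QS applies the same spreading to Minimax-Q backups, whose value step additionally solves a small linear program over mixed policies (omitted here). The spreading function, state space and parameters are toy assumptions, not the paper's soccer-domain choices.

```python
import numpy as np

n_states, n_actions = 10, 4
alpha, gamma = 0.1, 0.9
Q = np.zeros((n_states, n_actions))

def sigma(s, a, s2, a2):
    """Toy spreading function: full credit to the visited pair, partial
    credit to the same action in adjacent states (assumed similarity)."""
    if (s2, a2) == (s, a):
        return 1.0
    return 0.3 if a2 == a and abs(s2 - s) == 1 else 0.0

def qs_update(s, a, reward, s_next):
    """One experience updates every (s', a') weighted by sigma."""
    target = reward + gamma * Q[s_next].max()
    for s2 in range(n_states):
        for a2 in range(n_actions):
            w = sigma(s, a, s2, a2)
            if w > 0.0:
                Q[s2, a2] += alpha * w * (target - Q[s2, a2])

qs_update(s=3, a=1, reward=1.0, s_next=4)
print(Q[2:5, 1])   # the visited pair and its neighbors all moved
```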