998 results for Non-stationarity
Abstract:
Annual average daily traffic (AADT) is important information for many transportation planning, design, operation, and maintenance activities, as well as for the allocation of highway funds. Many studies have attempted AADT estimation using the factor approach, regression analysis, time series, and artificial neural networks. However, these methods are unable to account for the spatially variable influence of independent variables on the dependent variable, even though it is well known that spatial context is important to many transportation problems, including AADT estimation.

In this study, applications of geographically weighted regression (GWR) methods to estimating AADT were investigated. The GWR-based methods considered the influence of correlations among the variables over space and the spatial non-stationarity of the variables. A GWR model allows different relationships between the dependent and independent variables to exist at different points in space. In other words, model parameters vary from location to location, and the locally linear regression parameters at a point are affected more by observations near that point than by observations farther away.

The study area was Broward County, Florida. Broward County lies on the Atlantic coast between Palm Beach and Miami-Dade counties. In this study, a total of 67 variables were considered as potential AADT predictors, and six variables (lanes, speed, regional accessibility, direct access, density of roadway length, and density of seasonal households) were selected to develop the models.

To investigate the predictive power of the various AADT predictors over space, statistics including the local r-square, local parameter estimates, and local errors were examined and mapped. The local variations in the relationships among parameters were investigated, measured, and mapped to assess the usefulness of GWR methods.

The results indicated that the GWR models were able to better explain the variation in the data and to predict AADT with smaller errors than ordinary linear regression models for the same dataset. Additionally, GWR was able to model the spatial non-stationarity in the data, i.e., the spatially varying relationship between AADT and its predictors, which cannot be modeled in ordinary linear regression.
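The core of GWR is a locally weighted least-squares fit at each observation point, with weights decaying with distance. Here is a minimal NumPy sketch of that idea, not the study's actual model; the Gaussian kernel, the fixed bandwidth, and the toy variables are assumptions for illustration:

```python
import numpy as np

def gwr_local_coefficients(X, y, coords, bandwidth):
    """Minimal geographically weighted regression (GWR) sketch.

    At each location, fit weighted least squares in which nearby
    observations receive larger (Gaussian kernel) weights, so the
    coefficients are allowed to vary over space.
    """
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])                 # add intercept
    betas = np.empty((n, k + 1))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)    # distances to point i
        w = np.exp(-0.5 * (d / bandwidth) ** 2)           # Gaussian kernel weights
        W = np.diag(w)
        # local estimate: beta_i = (X' W X)^(-1) X' W y
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

# toy usage: 100 road segments with two predictors (say, lanes and speed)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(100, 2))
X = rng.normal(size=(100, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.1, size=100)
local_betas = gwr_local_coefficients(X, y, coords, bandwidth=2.0)
print(local_betas[:3])                                    # one coefficient vector per location
```

Mapping the rows of `local_betas` back to the segment coordinates is what yields the local parameter and local error surfaces described above.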
Abstract:
Periods of drought and low streamflow can have profound impacts on both human and natural systems. People depend on a reliable source of water for numerous reasons, including potable water supply and the production of economic value through agriculture or energy production. Aquatic ecosystems also depend on water, in addition to the economic benefits they provide to society through ecosystem services. Given that periods of low streamflow may become more extreme and frequent in the future, it is important to study the factors that control water availability during these times. In the absence of precipitation, the slower hydrological response of groundwater systems will play an amplified role in water supply. Understanding the variability of the fraction of streamflow contributed by baseflow, or groundwater, during periods of drought provides insight into what future water availability may look like and how it can best be managed. The Mills River Basin in North Carolina is chosen as a case study to test this understanding. First, a physically meaningful estimate of baseflow is obtained from USGS streamflow data via computerized hydrograph analysis techniques. Time series analysis, including wavelet analysis, is then applied to highlight signals of non-stationarity and to evaluate the changes in variance needed to better understand the natural variability of baseflow and low flows. In addition to natural variability, human influence must be taken into account in order to accurately assess how the combined system reacts to periods of low flow. Defining a combined demand that consists of both natural and human demand allows a more rigorous assessment of the level of sustainable use of a shared resource, in this case water. The analysis of baseflow variability can differ based on regional location and local hydrogeology, but it was found that baseflow varies from multiyear scales, such as those associated with ENSO (3.5, 7 years), up to multidecadal time scales, with most of the contributing variance coming from decadal or multiyear scales. It was also found that the behavior of baseflow, and subsequently water availability, depends a great deal on overall precipitation, the tracks of hurricanes or tropical storms and associated climate indices, as well as physiography and hydrogeology. By evaluating and applying the Duke Combined Hydrology Model (DCHM), reasonably accurate estimates of streamflow during periods of low flow were obtained, in part because of the model's ability to capture subsurface processes. Being able to accurately simulate streamflow levels and subsurface interactions during periods of drought can be very valuable to water suppliers and decision makers, and ultimately benefits citizens. Knowledge of future droughts and periods of low flow, together with tracking of customer demand, will allow water suppliers to adopt better management practices, such as knowing when to withdraw more water during a surplus so that stress on the system is minimized when water supply is not ample.
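The abstract refers to computerized hydrograph analysis for baseflow separation without naming a specific algorithm; a widely used choice is the Lyne-Hollick recursive digital filter. The sketch below (single forward pass, commonly used default alpha = 0.925) is an illustration under that assumption, not necessarily the technique applied in the thesis:

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-pass Lyne-Hollick recursive digital filter for baseflow separation.

    q     : daily streamflow values
    alpha : filter parameter (0.925 is a common default)
    Returns the baseflow series, constrained to 0 <= baseflow <= streamflow.
    """
    q = np.asarray(q, dtype=float)
    quick = np.zeros_like(q)                               # filtered quickflow component
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])           # keep the components physical
    return q - quick

# toy usage: baseflow index of a synthetic one-year record
rng = np.random.default_rng(1)
flow = 10 + np.abs(np.cumsum(rng.normal(0, 0.3, 365))) + rng.gamma(1.0, 2.0, 365)
baseflow = lyne_hollick_baseflow(flow)
print("baseflow index:", round(baseflow.sum() / flow.sum(), 2))
```

The resulting baseflow series is what a wavelet variance analysis, as described above, would then be applied to.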
Abstract:
The main objective of this research is to evaluate the long-term relationship between energy consumption and GDP for some Latin American countries in the period 1980-2009 -- The estimation has been done through the non-stationary panel approach, using the production function in order to control for other sources of GDP variation, such as capital and labor -- In addition, panel unit root tests are used in order to identify the non-stationarity of these variables, followed by the application of the panel cointegration test proposed by Pedroni (2004) to avoid a spurious regression (Entorf, 1997; Kao, 1999)
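Pedroni's (2004) panel cointegration test is not available in the common Python statistics libraries, so the sketch below only illustrates the overall workflow (unit root testing followed by cointegration testing), country by country, using the ADF and Engle-Granger tests from statsmodels on a hypothetical toy panel; it is not the paper's estimator:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

# toy panel: log GDP and log energy consumption for three hypothetical countries
rng = np.random.default_rng(2)
frames = []
for country in ["A", "B", "C"]:
    energy = np.cumsum(rng.normal(0.02, 0.05, 30))            # I(1) by construction
    gdp = 0.6 * energy + np.cumsum(rng.normal(0.01, 0.02, 30))
    frames.append(pd.DataFrame({"country": country, "gdp": gdp, "energy": energy}))
panel = pd.concat(frames, ignore_index=True)

for country, g in panel.groupby("country"):
    p_unit_root = adfuller(g["gdp"])[1]                        # ADF p-value (unit root)
    p_coint = coint(g["gdp"], g["energy"])[1]                  # Engle-Granger p-value
    print(f"{country}: ADF p = {p_unit_root:.2f}, cointegration p = {p_coint:.2f}")
```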
Abstract:
Doctorate in Forestry and Natural Resources Engineering - Instituto Superior de Agronomia - UL
Abstract:
The current approach to data analysis for the Laser Interferometer Space Antenna (LISA) depends on the time delay interferometry (TDI) observables, which have to be generated before any weak signal detection can be performed. These are linear combinations of the raw data with appropriate time shifts that lead to the cancellation of the laser frequency noises. This is possible because of the multiple occurrences of the same noises in the different raw data. Originally, these observables were generated manually, starting with LISA as a simple stationary array and then adjusting to incorporate the antenna's motions. However, none of the observables survived the flexing of the arms, in that they did not lead to cancellation with the same structure. The principal component approach, presented by Romano and Woan, is another way of handling these noises; it simplifies the data analysis by removing the need to create the observables before the analysis. This method also depends on the multiple occurrences of the same noises but, instead of using them for cancellation, it takes advantage of the correlations that they produce between the different readings. These correlations can be expressed in a noise (data) covariance matrix, which appears in the Bayesian likelihood function when the noises are assumed to be Gaussian. Romano and Woan showed that performing an eigendecomposition of this matrix produced two distinct sets of eigenvalues that can be distinguished by the absence of laser frequency noise from one set. The transformation of the raw data using the corresponding eigenvectors also produced data that were free from the laser frequency noises. This result led to the idea that the principal components may actually be time delay interferometry observables, since they produced the same outcome, that is, data that are free from laser frequency noise. The aims here were (i) to investigate the connection between the principal components and these observables, (ii) to prove that data analysis using them is equivalent to that using the traditional observables, and (iii) to determine how this method adapts to the real LISA, especially the flexing of the antenna. For testing the connection between the principal components and the TDI observables, a 10 × 10 covariance matrix containing integer values was used in order to obtain an algebraic solution for the eigendecomposition. The matrix was generated using fixed unequal arm lengths and stationary noises with equal variances for each noise type. Results confirm that all four Sagnac observables can be generated from the eigenvectors of the principal components. The observables obtained from this method, however, are tied to the length of the data and are not general expressions like the traditional observables; for example, the Sagnac observables for two different time stamps were generated from different sets of eigenvectors. It was also possible to generate the frequency-domain optimal AET observables from the principal components obtained from the power spectral density matrix. These results indicate that this method is another way of producing the observables; therefore, analysis using principal components should give the same results as analysis using the traditional observables. This was confirmed by the fact that the same relative likelihoods (within 0.3%) were obtained from the Bayesian estimates of the signal amplitude of a simple sinusoidal gravitational wave using the principal components and the optimal AET observables.
This method fails if the eigenvalues that are free from laser frequency noises are not generated. These are obtained from the covariance matrix, and the properties of LISA required for its computation are the phase-locking, arm lengths, and noise variances. Preliminary results on the effects of these properties on the principal components indicate that only the absence of phase-locking prevented their production. The flexing of the antenna results in time-varying arm lengths, which will appear in the covariance matrix, and, from our toy-model investigations, this did not prevent the occurrence of the principal components. The difficulty with flexing, and also with non-stationary noises, is that the Toeplitz structure of the matrix will be destroyed, which will affect any computation methods that take advantage of this structure. In terms of separating the two sets of data for the analysis, this was not necessary because the laser frequency noises are very large compared to the photodetector noises, which resulted in a significant reduction of the data containing them after the matrix inversion. In the frequency domain the power spectral density matrices were block diagonal, which simplified the computation of the eigenvalues by allowing it to be carried out separately for each block. The results in general showed a lack of principal components in the absence of phase-locking, except for the zero bin. The major difference with the power spectral density matrix is that the time-varying arm lengths and non-stationarity do not show up, because of the summation in the Fourier transform.
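A toy numerical illustration of the principal-component idea summarized above: several channels share a few very loud common noises, and the eigenvectors of the sample covariance matrix with small eigenvalues define combinations in which that common noise cancels. The dimensions, noise levels, and threshold below are arbitrary; this is not the LISA covariance model itself:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_channels = 5000, 6

# each channel mixes a few very loud common "laser" noises with small
# independent "photodetector" noises, mimicking correlated raw readings
laser = rng.normal(0, 100.0, size=(n_samples, 3))              # loud, shared noises
mixing = rng.normal(size=(3, n_channels))                      # how they enter each channel
photo = rng.normal(0, 1.0, size=(n_samples, n_channels))       # quiet, independent noises
data = laser @ mixing + photo

# eigendecomposition of the sample noise covariance matrix
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                         # ascending eigenvalues

# small eigenvalues correspond to combinations free of the loud common noise;
# projecting the data onto those eigenvectors suppresses it
quiet = eigvecs[:, eigvals < 10.0]
projected = data @ quiet
print("raw channel std:      ", data.std(axis=0).round(1))
print("projected channel std:", projected.std(axis=0).round(2))
```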
Abstract:
In this article we use an autoregressive fractionally integrated moving average approach to measure the degree of fractional integration of aggregate world CO2 emissions and their five components – coal, oil, gas, cement, and gas flaring. We find that all variables are stationary and mean reverting, but exhibit long-term memory. Our results suggest that coal and oil combustion emissions have the weakest degree of long-range dependence, while emissions from gas and gas flaring have the strongest. With evidence of long memory, we conclude that transitory policy shocks are likely to have long-lasting, but not permanent, effects. Accordingly, permanent effects on CO2 emissions require a more permanent policy stance. In this context, if one were to rely only on testing for stationarity and non-stationarity, one would likely conclude in favour of non-stationarity, and therefore that even transitory policy shocks would have permanent effects.
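ARFIMA estimation itself is not part of the mainstream Python libraries, but the degree of fractional integration d can be illustrated with the classic Geweke-Porter-Hudak (GPH) log-periodogram regression. The sketch below is a generic illustration with an arbitrary bandwidth choice, not the estimator used in the article:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Geweke-Porter-Hudak (GPH) log-periodogram estimate of the fractional
    integration parameter d: regress log I(lambda_j) on log(4 sin^2(lambda_j / 2))
    over the first m Fourier frequencies; d is minus the slope.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(np.sqrt(n))                                # common bandwidth choice
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

# toy usage: an AR(1) series is short-memory, so the estimate should be near 0
rng = np.random.default_rng(4)
e = rng.normal(size=2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.5 * x[t - 1] + e[t]
print("estimated d:", round(gph_estimate(x), 2))
```

A value of d between 0 and 0.5 corresponds to the stationary long-memory regime described in the abstract, while d >= 0.5 would indicate non-stationarity.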
Abstract:
The goal of this study was to investigate the impact of computing parameters and of the location of volumes of interest (VOIs) on the calculation of the 3D noise power spectrum (NPS), in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm³ VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This shows that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT.
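For readers unfamiliar with the measurement, the sketch below shows the generic ensemble-average definition of a 3D NPS (simple mean subtraction in place of the first-order detrending used in the study, and arbitrary toy dimensions); it is an illustration, not the authors' implementation:

```python
import numpy as np

def nps_3d(vois, voxel_size):
    """Estimate a 3D noise power spectrum from an ensemble of noise-only VOIs.

    vois       : array of shape (n_voi, nx, ny, nz) of noise-only sub-volumes
    voxel_size : (dx, dy, dz) sampling distances in mm
    Each VOI is detrended (mean subtraction here), Fourier transformed, and the
    squared magnitudes are ensemble averaged and normalized by the VOI size.
    """
    n_voi, nx, ny, nz = vois.shape
    dx, dy, dz = voxel_size
    spectra = np.zeros((nx, ny, nz))
    for voi in vois:
        detrended = voi - voi.mean()
        spectra += np.abs(np.fft.fftn(detrended)) ** 2
    return spectra / n_voi * (dx * dy * dz) / (nx * ny * nz)

# toy usage: white noise volumes should give an approximately flat NPS
rng = np.random.default_rng(5)
vois = rng.normal(0, 10.0, size=(20, 32, 32, 16))
nps = nps_3d(vois, voxel_size=(0.391, 0.391, 0.625))
print("mean NPS value:", round(float(nps.mean()), 3))
```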
Abstract:
There are several determinants that influence household location decisions. More specifically, recent economic literature assigns an increasingly important role to the variables governing quality of life. Nevertheless, the spatial stationarity of the parameters is implicitly assumed in most studies. Here we analyse the role of quality of life in urban economics and test for the spatial stationarity of the relationship between city growth and quality of life.
Different statistical procedures for detecting non-stationarity in precipitation series
Abstract:
The objective of this thesis is to determine whether the summer convective precipitation simulated by the Canadian Regional Climate Model (MRCC) is stationary over time or not. To answer this question, we propose one frequentist and one Bayesian statistical methodology. For the frequentist approach, we used standard quality control as well as the CUSUM to determine whether the mean has increased over the years. For the Bayesian approach, we compared the posterior distribution of precipitation over time. To do so, we modelled the posterior density of a given period and compared it with the posterior density of another period further away in time. For the comparison, we used statistics based on the Hellinger distance, the J-divergence, and the L2 norm. Throughout this thesis, we used the ARL (average run length) to calibrate and to compare each of our tools. A large part of this thesis is therefore dedicated to the study of the ARL. Once our tools were properly calibrated, we used simulations to compare them. Finally, we analysed the MRCC data to determine whether they are stationary or not.
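As a simple illustration of the frequentist side, here is a minimal one-sided CUSUM for detecting an upward shift in the mean, in its standard textbook form with an assumed reference value k and threshold h in standard-deviation units; the thesis' exact calibration via the ARL and the Bayesian comparison tools are not reproduced:

```python
import numpy as np

def cusum_upper(x, mu0, sigma, k=0.5, h=5.0):
    """One-sided (upper) CUSUM for detecting an increase in the mean.

    x     : observed series
    mu0   : in-control mean
    sigma : in-control standard deviation
    k     : reference value in sigma units (half the shift to detect)
    h     : decision threshold in sigma units
    Returns the index of the first alarm, or None if no alarm is raised.
    """
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + (xt - mu0) / sigma - k)
        if s > h:
            return t
    return None

# toy usage: the mean shifts upward by one standard deviation halfway through
rng = np.random.default_rng(6)
series = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print("first alarm at index:", cusum_upper(series, mu0=0.0, sigma=1.0))
```

The average run length (ARL) used for calibration in the thesis is simply the expected number of observations before such an alarm, evaluated with and without a shift in the mean.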
Abstract:
We investigate for 26 OECD economies whether their current account imbalances relative to GDP are driven by stochastic trends. Regarding bounded stationarity as the more natural counterpart of sustainability, results from Phillips–Perron tests for unit root and bounded unit root processes are contrasted. While the former hint at stationarity of current account imbalances for 12 economies, the latter indicate bounded stationarity for only six economies. Through panel-based test statistics, current account imbalances are diagnosed as bounded non-stationary. Thus, (spurious) rejections of the unit root hypothesis might be due to the existence of bounds reflecting hidden policy controls or financial crises.
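The standard Phillips–Perron test is available in the Python arch package (the bounded unit root variant used in the paper does not appear to be available in common libraries). A minimal usage sketch on a toy bounded, mean-reverting series, assuming that package:

```python
import numpy as np
from arch.unitroot import PhillipsPerron   # standard PP test only; no bounded variant here

# toy series standing in for a current-account-to-GDP ratio: mean reverting
# and, by construction, kept inside a band (mimicking hidden policy bounds)
rng = np.random.default_rng(7)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = np.clip(0.9 * y[t - 1] + rng.normal(0, 0.5), -3.0, 3.0)

pp = PhillipsPerron(y, trend="c")          # test with a constant term
print("PP statistic:", round(pp.stat, 2), "p-value:", round(pp.pvalue, 3))
```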
Abstract:
We study the optimal “inflation tax” in an environment with heterogeneous agents and non-linear income taxes. We first derive the general conditions needed for the optimality of the Friedman rule in this setup. These general conditions are distinct in nature and more easily interpretable than those obtained in the literature with a representative agent and linear taxation. We then study two standard monetary specifications and derive their implications for the optimality of the Friedman rule. For the shopping-time model the Friedman rule is optimal with essentially no restrictions on preferences or transaction technologies. For the cash-credit model the Friedman rule is optimal if preferences are separable between the consumption goods and leisure, or if leisure shifts consumption towards the credit good. We also study a generalized model which nests both models as special cases.
Abstract:
In most epidemiological studies, historical monitoring data are scant and must be pooled to identify occupational groups with homogeneous exposures. Homogeneity of exposure is generally assessed in a group of workers who share a common job title or work in a common area. While published results suggest that the degree of homogeneity varies widely across job groups, less is known about whether such variation differs across industrial sectors, classes of contaminants, or the methods used to group workers. Relying upon a compilation of results presented in the literature, patterns of homogeneity among nearly 500 occupational groups of workers were evaluated on the basis of type of industry and agent. Additionally, the effects of the characteristics of the sampling strategy on estimated indicators of homogeneity of exposure were assessed.

Exposure profiles for occupational groups of workers have typically been assessed under the assumption of stationarity, i.e., that the mean exposure level and the variance of the distribution that describes the underlying population of exposures are constant over time. Yet the literature has shown that occupational exposures have declined in recent decades, which renders traditional methods for the description of exposure profiles inadequate. Thus, work was needed to develop appropriate methods to assess homogeneity for groups of workers whose exposures have changed over time. A study was carried out applying mixed effects models with a term for temporal trend to appropriately describe exposure profiles of groups of workers in the nickel-producing industry over a 20-year period. Using a subset of groups of nickel-exposed workers, another study was conducted to develop and apply a framework for evaluating the assumption of stationarity of the variances in the presence of systematic changes in exposure levels over time.
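A minimal sketch of the kind of mixed-effects model with a temporal trend term described above, fitted with statsmodels on simulated data (the variable names, group structure, and trend size are invented for illustration; this is not the study's model of the nickel-industry data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulated repeated exposure measurements: log exposure declines over calendar
# time, with a random intercept for each group of workers
rng = np.random.default_rng(8)
rows = []
for group in range(40):
    group_effect = rng.normal(0, 0.5)
    for year in rng.integers(0, 20, size=8):
        log_exposure = 2.0 - 0.05 * year + group_effect + rng.normal(0, 0.3)
        rows.append({"group": group, "year": int(year), "log_exposure": log_exposure})
data = pd.DataFrame(rows)

# fixed temporal trend plus a random intercept per occupational group
model = smf.mixedlm("log_exposure ~ year", data, groups=data["group"])
fit = model.fit()
print("estimated annual trend in log exposure:", round(fit.params["year"], 3))
```

Allowing the residual or between-group variance terms to change over time, as in the second study described above, goes beyond this basic specification.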
Abstract:
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
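A rough sketch of step (i) under simplifying assumptions: a uniform running-window mean and standard deviation as the transformation, 365-day blocks as "years", and scipy's stationary GEV fit. The reverse transformation back to a non-stationary distribution and the actual tsEva implementation are omitted:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.stats import genextreme

def transformed_stationary_gev(x, window):
    """Transformed-stationary sketch: normalize a non-stationary series by a
    running mean and running standard deviation, then fit a stationary GEV
    to the block (annual) maxima of the normalized series.
    """
    mu_t = uniform_filter1d(x, size=window, mode="nearest")              # slowly varying mean
    sigma_t = np.sqrt(uniform_filter1d((x - mu_t) ** 2, size=window,
                                       mode="nearest"))                  # slowly varying std
    z = (x - mu_t) / sigma_t                                             # stationarized series
    annual_max = z.reshape(-1, 365).max(axis=1)                          # toy 365-day "years"
    shape, loc, scale = genextreme.fit(annual_max)
    return (shape, loc, scale), mu_t, sigma_t

# toy usage: a series with a linear trend and slowly growing variability
rng = np.random.default_rng(9)
t = np.arange(365 * 30)
x = 0.0005 * t + (1 + t / t[-1]) * rng.gumbel(0, 1, size=t.size)
params, mu_t, sigma_t = transformed_stationary_gev(x, window=365 * 5)
print("stationary GEV parameters (shape, loc, scale):", np.round(params, 2))
```

Large return levels for the original series would then be obtained by mapping the stationary GEV quantiles back through mu_t and sigma_t.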
Abstract:
The aim of the study was to analyze the frequency of epidermal growth factor receptor (EGFR) mutations in Brazilian non-small cell lung cancer (NSCLC) patients and to correlate these mutations with response to platinum-based chemotherapy. Our cohort consisted of prospective patients with NSCLC who received chemotherapy (platinum derivatives plus paclitaxel) at UNICAMP, Brazil. EGFR exons 18-21 were analyzed in tumor-derived DNA. Fifty patients were included in the study (25 with adenocarcinoma). EGFR mutations were identified in 6/50 (12 %) NSCLCs and in 6/25 (24 %) adenocarcinomas, representing the frequency of EGFR mutations in a mostly self-reported White (82.0 %) southeastern Brazilian population of NSCLC patients. Patients with NSCLCs harboring EGFR exon 19 deletions or the exon 21 L858R mutation were found to have a higher chance of response to platinum-paclitaxel (OR 9.67 [95 % CI 1.03-90.41], p = 0.047). We report the frequency of EGFR activating mutations in a typical southeastern Brazilian population with NSCLC, which is similar to that of other countries with Western European ethnicity. EGFR mutations seem to be predictive of a response to platinum-paclitaxel, and additional studies are needed to confirm or refute this relationship.
Abstract:
The metabolic enzyme fatty acid synthase (FASN) is responsible for the endogenous synthesis of palmitate, a saturated long-chain fatty acid. In contrast to most normal tissues, a variety of human cancers overexpress FASN. One such cancer is cutaneous melanoma, in which the level of FASN expression is associated with tumor invasion and poor prognosis. We previously reported that two FASN inhibitors, cerulenin and orlistat, induce apoptosis in B16-F10 mouse melanoma cells via the intrinsic apoptosis pathway. Here, we investigated the effects of these inhibitors on non-tumorigenic melan-a cells. Cerulenin and orlistat treatments were found to induce apoptosis and decrease cell proliferation, in addition to inducing the release of mitochondrial cytochrome c and activating caspases-9 and -3. Transfection with FASN siRNA did not result in apoptosis. Mass spectrometry analysis demonstrated that treatment with the FASN inhibitors did not alter either the mitochondrial free fatty acid content or composition. This result suggests that cerulenin- and orlistat-induced apoptosis events are independent of FASN inhibition. Analysis of the energy-linked functions of melan-a mitochondria demonstrated the inhibition of respiration, followed by a significant decrease in mitochondrial membrane potential (ΔΨm) and the stimulation of superoxide anion generation. The inhibition of NADH-linked substrate oxidation was approximately 40% and 61% for cerulenin and orlistat treatments, respectively, and the inhibition of succinate oxidation was approximately 46% and 52%, respectively. In contrast, no significant inhibition occurred when respiration was supported by the complex IV substrate N,N,N',N'-tetramethyl-p-phenylenediamine (TMPD). The protection conferred by the free radical scavenger N-acetyl-cysteine indicates that the FASN inhibitors induced apoptosis through an oxidative stress-associated mechanism. In combination, the present results demonstrate that cerulenin and orlistat induce apoptosis in non-tumorigenic cells via mitochondrial dysfunction, independent of FASN inhibition.