33 results for time series studies
Abstract:
This case study deals with the role of time series analysis in sociology, and its relationship with the wider literature and methodology of comparative case study research. Time series analysis is now well represented in top-ranked sociology journals, often in the form of ‘pooled time series’ research designs. These studies typically pool multiple countries into a time series cross-section panel, in order to provide a larger sample for more robust and comprehensive analysis. This approach is well suited to exploring transnational phenomena, and to elaborating useful macro-level theories specific to social structures, national policies, and long-term historical processes. It is less suited, however, to understanding how these global social processes work in different countries. As such, the complexities of individual countries - which often display very different or even contradictory dynamics from those suggested in pooled studies - are subsumed. Meanwhile, a robust literature on comparative case-based methods exists in the social sciences, where researchers focus on differences between cases, and the complex ways in which they co-evolve or diverge over time. A good example is the inequality literature: although panel studies suggest a general trend of rising inequality driven by the weakening power of labour, the marketisation of welfare, and the rising power of capital, some countries have still managed to remain resilient. This case study takes a closer look at what can be learned by applying the insights of case-based comparative research to the method of time series analysis. Taking international income inequality as its point of departure, it argues that we have much to learn about the viability of different combinations of policy options by examining how they work in different countries over time.
By taking representative cases from different welfare systems (liberal, social democratic, corporatist, or antipodean), we can better sharpen our theories of how policies can be more specifically engineered to offset rising inequality. This involves a fundamental realignment of the strategy of time series analysis, grounding it instead in a qualitative appreciation of the historical context of cases, as a basis for comparing effects between different countries.
Abstract:
Many modeling problems require estimating a scalar output from one or more time series. Such problems are usually tackled by extracting a fixed number of features from the time series (such as their statistical moments), with a consequent loss of information that leads to suboptimal predictive models. Moreover, feature extraction techniques usually make assumptions that are not met in real-world settings (e.g. uniformly sampled time series of constant length), and fail to deliver a thorough methodology for dealing with noisy data. In this paper a methodology based on functional learning is proposed to overcome these problems; the proposed Supervised Aggregative Feature Extraction (SAFE) approach makes it possible to derive continuous, smooth estimates of time series data (yielding aggregate local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The SAFE paradigm enjoys several properties, such as a closed-form solution, the incorporation of first- and second-order derivative information into the regressor matrix, the interpretability of the generated functional predictor, and the possibility of exploiting a Reproducing Kernel Hilbert Space setting to yield nonlinear predictive models. Simulation studies are provided to highlight the strengths of the new methodology with respect to standard unsupervised feature extraction approaches. © 2012 IEEE.
Abstract:
In many applications, and especially those where batch processes are involved, a target scalar output of interest is often dependent on one or more time series of data. With the exponential growth of data logging in modern industries, such time series are increasingly available for statistical modelling in soft sensing applications. In order to exploit time series data for predictive modelling, it is necessary to summarise the information they contain as a set of features to use as model regressors. Typically this is done in an unsupervised fashion using simple techniques such as computing statistical moments, principal components or wavelet decompositions, often leading to significant information loss and hence suboptimal predictive models. In this paper, a functional learning paradigm is exploited in a supervised fashion to derive continuous, smooth estimates of time series data (yielding aggregated local information), while simultaneously estimating a continuous shape function that yields optimal predictions. The proposed Supervised Aggregative Feature Extraction (SAFE) methodology can be extended to support nonlinear predictive models by embedding the functional learning framework in a Reproducing Kernel Hilbert Space setting. SAFE has a number of attractive features, including a closed-form solution and the ability to explicitly incorporate first- and second-order derivative information. Using simulation studies and a practical semiconductor manufacturing case study, we highlight the strengths of the new methodology with respect to standard unsupervised feature extraction approaches.
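The core idea (learning a continuous weighting or "shape" function jointly with the predictor, rather than extracting fixed unsupervised features) can be illustrated with a minimal sketch. This is not the authors' SAFE implementation: the polynomial basis, the ridge term and the synthetic data below are illustrative assumptions.

```python
# A minimal sketch of supervised functional feature extraction (not the
# authors' SAFE code): each scalar target is modelled as an integral of the
# time series against an unknown shape function w(t), learned jointly with
# the predictor. Basis, ridge term and data are illustrative assumptions.
import random

random.seed(0)
T, n = 50, 200                         # samples per series, number of series
ts = [t / (T - 1) for t in range(T)]

def true_weight(t):
    # Hidden shape function: only the tail of each series matters.
    return 1.0 if t > 0.7 else 0.0

series, targets = [], []
for _ in range(n):
    x = [random.gauss(0, 1) for _ in range(T)]
    series.append(x)
    targets.append(sum(true_weight(t) * v for t, v in zip(ts, x)) / T)

# Basis expansion w(t) = sum_k c_k t^k turns the functional model into
# ordinary least squares in c, with features z_ik = (1/T) sum_t t^k x_i(t).
K = 4
Z = [[sum((t ** k) * v for t, v in zip(ts, x)) / T for k in range(K)]
     for x in series]

# Normal equations (Z'Z + ridge*I) c = Z'y, solved by Gaussian elimination.
ridge = 1e-8
A = [[sum(Z[i][p] * Z[i][q] for i in range(n)) + (ridge if p == q else 0.0)
      for q in range(K)] for p in range(K)]
rhs = [sum(Z[i][p] * targets[i] for i in range(n)) for p in range(K)]
for col in range(K):                       # forward elimination with pivoting
    piv = max(range(col, K), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, K):
        f = A[r][col] / A[col][col]
        for q in range(col, K):
            A[r][q] -= f * A[col][q]
        rhs[r] -= f * rhs[col]
c = [0.0] * K
for r in range(K - 1, -1, -1):             # back substitution
    c[r] = (rhs[r] - sum(A[r][q] * c[q] for q in range(r + 1, K))) / A[r][r]

def learned_weight(t):
    return sum(c[k] * t ** k for k in range(K))

# The learned shape function should be larger near the end of the series,
# mirroring the hidden step weight at t = 0.7.
print(learned_weight(0.1), learned_weight(0.9))
```

Unsupervised summaries such as the series mean would discard where in the series the information sits; the supervised shape function recovers it from the targets.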
Abstract:
Historical GIS has the potential to re-invigorate our use of statistics from historical censuses and related sources. In particular, areal interpolation can be used to create long-run time-series of spatially detailed data that will enable us to enhance significantly our understanding of geographical change over periods of a century or more. The difficulty with areal interpolation, however, is that the data that it generates are estimates which will inevitably contain some error. This paper describes a technique that allows the automated identification of possible errors at the level of the individual data values.
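As background to where such errors arise, the baseline operation, simple area-weighted areal interpolation, can be sketched as follows. The zone names, populations and overlap fractions are invented for illustration.

```python
# A minimal sketch of area-weighted areal interpolation, the operation whose
# estimation errors the paper seeks to flag automatically. It redistributes
# each source zone's count to target zones in proportion to overlap area,
# which assumes population is uniform within source zones.
source_pop = {"S1": 1000, "S2": 500}
# Fraction of each source zone's area falling inside each target zone.
overlap = {("S1", "T1"): 0.6, ("S1", "T2"): 0.4,
           ("S2", "T1"): 0.25, ("S2", "T2"): 0.75}

target_pop = {}
for (src, tgt), frac in overlap.items():
    target_pop[tgt] = target_pop.get(tgt, 0.0) + source_pop[src] * frac

print(target_pop)   # {'T1': 725.0, 'T2': 775.0}
```

The uniform-density assumption is precisely what produces the estimation errors that the paper's technique aims to identify at the level of individual data values.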
Abstract:
Objectives: Methicillin-resistant Staphylococcus aureus (MRSA) is a major nosocomial pathogen worldwide. A wide range of factors have been suggested to influence the spread of MRSA. The objective of this study was to evaluate the effect of antimicrobial drug use and infection control practices on nosocomial MRSA incidence in a 426-bed general teaching hospital in Northern Ireland.
Methods: The present research involved the retrospective collection of monthly data on the usage of antibiotics and on infection control practices within the hospital over a 5 year period (January 2000–December 2004). A multivariate ARIMA (time-series analysis) model was built to relate MRSA incidence with antibiotic use and infection control practices.
Results: Analysis of the 5 year data set showed that temporal variations in MRSA incidence followed temporal variations in the use of fluoroquinolones, third-generation cephalosporins, macrolides and amoxicillin/clavulanic acid (coefficients = 0.005, 0.03, 0.002 and 0.003, respectively, with various time lags). Temporal relationships were also observed between MRSA incidence and infection control practices, i.e. the number of patients actively screened for MRSA (coefficient = -0.007), the use of alcohol-impregnated wipes (coefficient = -0.0003) and the bulk orders of alcohol-based handrub (coefficients = -0.04 and -0.08), with increased infection control activity being associated with decreased MRSA incidence, and between MRSA incidence and the number of new patients admitted with MRSA (coefficient = 0.22). The model explained 78.4% of the variance in the monthly incidence of MRSA.
Conclusions: The results of this study confirm the value of infection control policies as well as suggest the usefulness of restricting the use of certain antimicrobial classes to control MRSA.
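A drastically simplified stand-in for the study's multivariate ARIMA model is a single-predictor lagged regression relating monthly incidence to antibiotic use some months earlier. The series, the 4-month lag and the effect size below are synthetic illustrations, not values from the study.

```python
# Hedged sketch (not the study's ARIMA model): relate monthly MRSA incidence
# to fluoroquinolone use `lag` months earlier with a lagged least-squares fit.
# All numbers are synthetic illustrations.
import random

random.seed(1)
months, lag = 60, 4                    # Jan 2000 - Dec 2004; assumed delay

use = [100 + random.gauss(0, 10) for _ in range(months)]
# Synthetic incidence: baseline + effect of use `lag` months earlier + noise.
mrsa = [5 + 0.05 * use[max(m - lag, 0)] + random.gauss(0, 0.5)
        for m in range(months)]

# Align the predictor at the lag and fit the slope by ordinary least squares.
x, y = use[:months - lag], mrsa[lag:]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
print(slope)   # should recover a small positive coefficient near 0.05
```

A full transfer-function ARIMA analysis would additionally difference the series and model autocorrelated errors; the sketch only shows the lagged-association idea behind the reported coefficients.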
Abstract:
Recent advances in the study of quantum vibrations and rotations in the fundamental hydrogen molecules are reported. Using the deuterium molecules (D-2(+) and D-2) as exemplars, the application of ultrafast femtosecond pump-probe experiments to study the creation and time-resolved imaging of coherent nuclear wavepackets is discussed. The ability to study the motion of these fundamental molecules in the time-domain is a notable milestone, made possible through the advent of ultrashort intense laser pulses with durations on sub-vibrational (and sub-rotational) timescales. Quantum wavepacket revivals are characterised for both vibrational and rotational degrees of freedom and quantum models are used to provide a detailed discussion of the underlying ultrafast physical dynamics for the specialist and non-specialist alike. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Based on an algorithm for pattern matching in character strings, we implement a pattern matching machine that searches for occurrences of patterns in multidimensional time series. Before the search process takes place, time series are encoded in user-designed alphabets. The patterns, on the other hand, are formulated as regular expressions that are composed of letters from these alphabets and operators. Furthermore, we develop a genetic algorithm to breed patterns that maximize a user-defined fitness function. In an application to financial data, we show that patterns bred to predict high exchange rate volatility in training samples retain statistically significant predictive power in validation samples.
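The encode-then-match idea can be sketched in a few lines: map each price move to a letter of a small user-designed alphabet, then search the encoded series with a regular expression. The alphabet and the pattern below are illustrative assumptions, not the paper's bred patterns.

```python
# Minimal sketch of searching a symbolized time series with a regular
# expression. Alphabet and pattern are illustrative assumptions.
import re

prices = [100, 101, 103, 102, 102, 99, 95, 96, 101, 104]
moves = [b - a for a, b in zip(prices, prices[1:])]

def encode(m):
    # u: strong up move, d: strong down move, f: flat or small move
    if m >= 2:
        return "u"
    if m <= -2:
        return "d"
    return "f"

encoded = "".join(encode(m) for m in moves)
print(encoded)                 # -> fuffddfuu

# Pattern: at least two strong down moves, then a recovery to a strong up.
pattern = re.compile(r"d{2,}f*u")
matches = [(m.start(), m.group()) for m in pattern.finditer(encoded)]
print(matches)                 # -> [(4, 'ddfu')]
```

The paper's genetic algorithm would then mutate and recombine such regular expressions, keeping those whose matches score best on the user-defined fitness function.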
Abstract:
The stochastic nature of oil price fluctuations is investigated over a twelve-year period, using data from an existing database (the USA Energy Information Administration database, available online). We evaluate the scaling exponents of the fluctuations by employing different statistical analysis methods, namely rescaled range analysis (R/S), scaled windowed variance analysis (SWV) and the generalized Hurst exponent (GH) method. Relying on the scaling exponents obtained, we apply a rescaling procedure to investigate the complex characteristics of the probability density functions (PDFs) dominating oil price fluctuations. It is found that the PDFs exhibit scale invariance, and in fact collapse onto a single curve when increments are measured over microscales (typically less than 30 days). The time evolution of the distributions is well fitted by a Levy-type stable distribution. The relevance of a Levy distribution is made plausible by a simple model of nonlinear transfer. Our results also exhibit a degree of multifractality, as the PDFs change and converge toward a Gaussian distribution at the macroscales.
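Of the three estimators mentioned, rescaled range (R/S) analysis is the most easily sketched. The following is a minimal illustration on synthetic uncorrelated noise, for which the Hurst exponent should be close to 0.5 (R/S is known to have a small-sample upward bias), not an analysis of the oil price data.

```python
# Minimal rescaled range (R/S) analysis: the Hurst exponent is the slope of
# log(R/S) against log(window size). Applied here to synthetic white noise.
import math
import random

random.seed(2)
x = [random.gauss(0, 1) for _ in range(4096)]

def rescaled_range(seg):
    m = sum(seg) / len(seg)
    dev = [v - m for v in seg]
    cum, z = 0.0, []
    for d in dev:                      # cumulative deviations from the mean
        cum += d
        z.append(cum)
    r = max(z) - min(z)                # range of the cumulative walk ...
    s = math.sqrt(sum(d * d for d in dev) / len(seg))
    return r / s                       # ... rescaled by the segment's sd

log_n, log_rs = [], []
n = 16
while n <= 1024:                       # window sizes 16, 32, ..., 1024
    vals = [rescaled_range(x[i:i + n]) for i in range(0, len(x) - n + 1, n)]
    log_n.append(math.log(n))
    log_rs.append(math.log(sum(vals) / len(vals)))
    n *= 2

# Least-squares slope of log(R/S) on log(n) gives the Hurst estimate.
mx, my = sum(log_n) / len(log_n), sum(log_rs) / len(log_rs)
hurst = (sum((a - mx) * (b - my) for a, b in zip(log_n, log_rs))
         / sum((a - mx) ** 2 for a in log_n))
print(hurst)
```

Persistent (trending) series give slopes above 0.5 and anti-persistent series below 0.5, which is what makes the exponent useful for characterising price fluctuations.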
Abstract:
Objective: To evaluate the feasibility of conducting a definitive study to assess the impact of introducing a rapid PCR-based test for candidemia on antifungal drug prescribing. Method: Prospective, single-centre, interrupted time series study consisting of three periods of six months' duration. The PCR assay was available for routine use by physicians during the second period, Monday–Friday, with a guaranteed 24-h turnaround time. For each period total antifungal drug use, expressed as treatment-days, was recorded, and an adjustment was made to exclude estimated use for proven candidemia. Also, during the intervention period, antifungal prescribing decisions for up to 72 h after each PCR result became available were recorded as either concordant or discordant with that result. Results: While overall antifungal use remained relatively stable throughout, after adjustment for candidemia there was a 38% reduction in use following introduction of the PCR test; however, this was nonsignificant at the 95% level. During the intervention period overall concordance between the PCR result and prescribing decisions was 84%. Conclusions: The PCR assay for candidemia was requested by physicians, prescribing decisions were generally concordant with its results, and there was an apparent decrease in antifungal prescribing, although this was sustained even after withdrawal of the intervention; these findings should be more thoroughly evaluated in a larger trial.
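Interrupted time series designs like this three-period one are commonly analysed by estimating the level shift associated with the intervention period. A minimal version on synthetic monthly data (all numbers invented, not the study's) looks like:

```python
# Minimal level-shift analysis of a three-period interrupted time series,
# on synthetic monthly antifungal-use data (all numbers invented).
import random

random.seed(5)
period = [0] * 6 + [1] * 6 + [2] * 6     # baseline, intervention, withdrawal
use = [100 - 30 * (p == 1) + random.gauss(0, 3) for p in period]

# Level shift: intervention-period mean minus the mean of the other periods.
interv = [u for u, p in zip(use, period) if p == 1]
control = [u for u, p in zip(use, period) if p != 1]
shift = sum(interv) / len(interv) - sum(control) / len(control)
print(shift)   # recovers the built-in reduction of about -30
```

A full segmented regression would additionally model underlying trend and autocorrelation, which matters when, as here, a change appears to persist after the intervention is withdrawn.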
Abstract:
The validity of load estimates from intermittent, instantaneous grab sampling is dependent on adequate spatial coverage by monitoring networks and a sampling frequency that reflects the variability in the system under study. Catchments with a flashy hydrology due to surface runoff pose a particular challenge, as intense short-duration rainfall events may account for a significant portion of the total diffuse transfer of pollution from soil to water in any hydrological year. This can also be exacerbated by the presence of strong background pollution signals from point sources during low flows. In this paper, a range of sampling methodologies and load estimation techniques are applied to phosphorus data from such a surface water dominated river system, instrumented at three sub-catchments (ranging from 3 to 5 km2 in area) with near-continuous monitoring stations. Systematic and Monte Carlo approaches were applied to simulate grab sampling using multiple strategies and to calculate an estimated load, Le, based on established load estimation methods. Comparison with the actual load, Lt, revealed significant average underestimation, of up to 60%, and high variability for all feasible sampling approaches. Further analysis of the time series provides an insight into these observations, revealing peak frequencies and power-law scaling in the distributions of P concentration, discharge and load associated with surface runoff and background transfers. Results indicate that only near-continuous monitoring that reflects the rapid temporal changes in these river systems is adequate for comparative monitoring and evaluation purposes. While the implications of this analysis may apply mainly to small-scale flashy systems, this represents an appropriate scale in terms of evaluating catchment mitigation strategies such as agri-environmental policies for managing diffuse P transfers in complex landscapes.
Abstract:
In this paper we investigate the influence of a power-law noise model, also called 1/f noise, on the performance of a feed-forward neural network used to predict time series. We introduce an optimization procedure that optimizes the parameters of the neural network by maximizing the likelihood function based on the power-law noise model. We show that our optimization procedure minimizes the mean squared error, leading to an optimal prediction. Further, we present numerical results applying the method to time series from the logistic map and the annual number of sunspots, and demonstrate that a power-law noise model gives better results than a Gaussian noise model.
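A drastically simplified stand-in for this setup (a one-parameter model rather than a neural network, and one-sided heavy-tailed spikes rather than a calibrated power-law process) illustrates why a heavy-tailed likelihood can beat a Gaussian one on such data:

```python
# Fit the logistic-map parameter r from noisy one-step pairs, comparing a
# Gaussian log-likelihood (least squares) with a power-law (heavy-tailed)
# log-likelihood over the same parameter grid. A simplified illustration,
# not the paper's neural-network procedure.
import math
import random

random.seed(4)
r_true = 3.7
x, v = [], 0.4
for _ in range(300):                     # chaotic logistic-map trajectory
    v = r_true * v * (1 - v)
    x.append(v)

pairs = []
for a, b in zip(x, x[1:]):               # one-step training pairs
    noise = random.gauss(0, 0.01)
    if random.random() < 0.1:            # occasional large one-sided spikes
        noise += random.uniform(0.3, 0.6)
    pairs.append((a, b + noise))

def neg_loglik(r, power_law):
    total = 0.0
    for a, b in pairs:
        e = (b - r * a * (1 - a)) / 0.01       # scaled residual
        # Power-law density ~ (1 + e^2)^(-2) versus Gaussian ~ exp(-e^2).
        total += 2.0 * math.log(1 + e * e) if power_law else e * e
    return total

grid = [3.0 + i * 0.001 for i in range(1001)]
r_gauss = min(grid, key=lambda r: neg_loglik(r, False))
r_power = min(grid, key=lambda r: neg_loglik(r, True))
print(r_gauss, r_power)   # the heavy-tailed fit typically lands near 3.7
```

The squared-error objective is dragged toward the spikes, while the logarithmic growth of the power-law objective bounds their influence, mirroring the paper's finding that the power-law likelihood outperforms the Gaussian one.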