961 results for Linear Models in Temporal Series
Abstract:
In this work we propose a new variational model for the consistent estimation of motion fields. The aim of this work is to develop appropriate spatio-temporal coherence models. In this sense, we propose two main contributions: a nonlinear flow constancy assumption, similar in spirit to the nonlinear brightness constancy assumption, which conveniently relates flow fields at different time instants; and a nonlinear temporal regularization scheme, which complements the spatial regularization and can cope with piecewise continuous motion fields. Together, these contributions yield a congruent variational model, since all the energy terms except the spatial regularization are based on nonlinear warpings of the flow field. This model is more general than its spatial counterpart, provides more accurate solutions and preserves the continuity of optical flows in time. In the experiments, we show that the method attains better results and, in particular, considerably improves accuracy in the presence of large displacements.
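As a rough illustration of the kind of energy such a spatio-temporal model minimizes (a sketch in my own notation; the paper's exact terms, weights and robust penalties may differ), with flows $u_t$ between consecutive frames $I_t$ and $I_{t+1}$:

\[
E(u_1,\dots,u_{T-1}) \;=\; \sum_{t}\int_{\Omega}\Big(\,\underbrace{|I_{t+1}(x+u_t(x)) - I_t(x)|}_{\text{nonlinear brightness constancy}} \;+\; \beta\,\underbrace{|u_{t+1}(x+u_t(x)) - u_t(x)|}_{\text{nonlinear flow constancy / temporal term}} \;+\; \lambda\,\underbrace{|\nabla u_t(x)|}_{\text{spatial regularization}}\Big)\,dx .
\]

Only the spatial term acts on a single flow field; the data and temporal terms warp one flow (or image) by another, which is what makes all terms except the spatial regularization rely on nonlinear warpings of the flow field, as stated above.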
Abstract:
Despite the widespread popularity of linear models for correlated outcomes (e.g. linear mixed models and time series models), distribution diagnostic methodology remains relatively underdeveloped in this context. In this paper we present an easy-to-implement approach that lends itself to graphical displays of model fit. Our approach involves multiplying the estimated marginal residual vector by the Cholesky decomposition of the inverse of the estimated marginal variance matrix. Linear functions of the resulting "rotated" residuals are used to construct an empirical cumulative distribution function (ECDF), whose stochastic limit is characterized. We describe a resampling technique that serves as a computationally efficient parametric bootstrap for generating representatives of the stochastic limit of the ECDF. Through functionals, such representatives are used to construct global tests for the hypothesis of normal marginal errors. In addition, we demonstrate that the ECDF of the predicted random effects, as described by Lange and Ryan (1989), can be formulated as a special case of our approach. Thus, our method supports both omnibus and directed tests. Our method works well in a variety of circumstances, including models having independent units of sampling (clustered data) and models for which all observations are correlated (e.g., a single time series).
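A minimal sketch of the rotation step described above (my own illustration, not the authors' code): residuals are premultiplied by a Cholesky-type factor of the inverse marginal variance so that, under a correctly specified model, the rotated residuals are approximately uncorrelated with unit variance, and their ECDF can be inspected. Variable names and the simulated data are assumptions.

```python
import numpy as np
from scipy.linalg import solve_triangular

def rotated_residuals(y, mu_hat, V_hat):
    """Rotate marginal residuals by a Cholesky-type factor of inv(V_hat)."""
    r = y - mu_hat                              # estimated marginal residuals
    C = np.linalg.cholesky(V_hat)               # V_hat = C @ C.T
    return solve_triangular(C, r, lower=True)   # equivalent to multiplying by inv(C)

def ecdf(values):
    """Empirical cumulative distribution function of the rotated residuals."""
    x = np.sort(values)
    return x, np.arange(1, x.size + 1) / x.size

# Illustrative use with simulated exchangeable-correlation data
rng = np.random.default_rng(0)
n = 60
V = 0.5 * np.ones((n, n)) + 0.5 * np.eye(n)
y = rng.multivariate_normal(np.zeros(n), V)
grid, F = ecdf(rotated_residuals(y, np.zeros(n), V))
```

Plotting F against the standard normal CDF on the same grid gives the kind of graphical display of model fit the abstract refers to.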
Abstract:
A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
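One common way to "account for the correlation among repeated responses over time" is a regression on covariates with autoregressive errors; a hedged sketch follows (the data, column meanings, and AR(1) choice are my illustrative assumptions, not a method prescribed by this overview).

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
n_weeks = 200
temperature = rng.normal(20.0, 5.0, n_weeks)        # hypothetical weekly covariate
errors = np.zeros(n_weeks)
for t in range(1, n_weeks):                          # AR(1)-correlated errors
    errors[t] = 0.6 * errors[t - 1] + rng.normal(0.0, 1.0)
admissions = 50.0 + 0.8 * temperature + errors       # simulated weekly ED admissions

# Regression on the covariate with AR(1) errors, so serial correlation in the
# repeated responses is modeled rather than ignored.
fit = SARIMAX(admissions, exog=temperature, order=(1, 0, 0)).fit(disp=False)
print(fit.params)
```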
Abstract:
This paper introduces and analyzes a stochastic search method for parameter estimation in linear regression models in the spirit of Beran and Millar [Ann. Statist. 15(3) (1987) 1131–1154]. The idea is to generate a random finite subset of the parameter space that will automatically contain points very close to the unknown true parameter. The motivation for this procedure comes from recent work of Dümbgen et al. [Ann. Statist. 39(2) (2011) 702–730] on regression models with log-concave error distributions.
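A toy sketch of the general idea only (the paper's actual construction of the random candidate set and its choice of criterion are more refined): draw a random finite set of candidate parameter vectors and keep the one that fits best.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Generate a random finite subset of the parameter space ...
candidates = rng.uniform(-5.0, 5.0, size=(20000, p))
# ... and keep the candidate with the smallest residual sum of squares.
rss = ((y[None, :] - candidates @ X.T) ** 2).sum(axis=1)
beta_hat = candidates[np.argmin(rss)]
print(beta_hat)
```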
Abstract:
The Atlantic subpolar gyre (SPG) is one of the main drivers of decadal climate variability in the North Atlantic. Here we analyze its dynamics in pre-industrial control simulations of 19 different comprehensive coupled climate models. The analysis is based on a recently proposed description of the SPG dynamics that found the circulation to be potentially bistable due to a positive feedback mechanism including salt transport and enhanced deep convection in the SPG center. We employ a statistical method to identify multiple equilibria in time series that are subject to strong noise and analyze composite fields to assess whether the bistability results from the hypothesized feedback mechanism. Because noise dominates the time series in most models, multiple circulation modes can unambiguously be detected in only six models. Four of these six models confirm that the intensified circulation mode is caused by the hypothesized positive feedback mechanism.
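The abstract does not specify the statistical method, but one standard way to look for multiple equilibria in a noisy index time series is to compare one- and two-component Gaussian mixture fits by an information criterion; the sketch below is such an illustration (the simulated SPG-strength index and the BIC criterion are my assumptions).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Simulated SPG-strength index that switches between two circulation modes
modes = rng.choice([0.0, 3.0], size=2000, p=[0.6, 0.4])
index = modes + rng.normal(scale=1.0, size=2000)

X = index.reshape(-1, 1)
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in (1, 2)}
print(bic)   # a clearly lower BIC for k=2 suggests two distinct circulation modes
```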
Abstract:
BACKGROUND: Risk factors and outcomes of bronchial stricture after lung transplantation are not well defined. An association between acute rejection and development of stricture has been suggested in small case series. We evaluated this relationship using a large national registry. METHODS: All lung transplantations between April 1994 and December 2008 per the United Network for Organ Sharing (UNOS) database were analyzed. Generalized linear models were used to determine the association between early rejection and development of stricture after adjusting for potential confounders. The association of stricture with postoperative lung function and overall survival was also evaluated. RESULTS: Nine thousand three hundred thirty-five patients were included for analysis. The incidence of stricture was 11.5% (1,077/9,335), with no significant change in incidence during the study period (P=0.13). Early rejection was associated with a significantly greater incidence of stricture (adjusted odds ratio [AOR], 1.40; 95% confidence interval [CI], 1.22-1.61; p<0.0001). Male sex, restrictive lung disease, and pretransplantation requirement for hospitalization were also associated with stricture. Those who experienced stricture had a lower postoperative peak percent predicted forced expiratory volume at 1 second (FEV1) (median 74% versus 86% for bilateral transplants only; p<0.0001), shorter unadjusted survival (median 6.09 versus 6.82 years; p<0.001) and increased risk of death after adjusting for potential confounders (adjusted hazard ratio 1.13; 95% CI, 1.03-1.23; p=0.007). CONCLUSIONS: Early rejection is associated with an increased incidence of stricture. Recipients with stricture demonstrate worse postoperative lung function and survival. Prospective studies may be warranted to further assess causality and the potential for coordinated rejection and stricture surveillance strategies to improve postoperative outcomes.
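Adjusted odds ratios of the kind reported above typically come from a logistic regression, one member of the generalized linear model family named in the methods. The sketch below is a hedged illustration only: the data frame and its column names are hypothetical, not the UNOS registry extract or the authors' full covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical stand-in for the registry extract (binary 0/1 columns)
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "stricture": rng.integers(0, 2, 1000),
    "early_rejection": rng.integers(0, 2, 1000),
    "male": rng.integers(0, 2, 1000),
    "restrictive_disease": rng.integers(0, 2, 1000),
    "hospitalized_pre_tx": rng.integers(0, 2, 1000),
})

fit = smf.glm("stricture ~ early_rejection + male + restrictive_disease + hospitalized_pre_tx",
              data=df, family=sm.families.Binomial()).fit()
aor = np.exp(fit.params["early_rejection"])            # adjusted odds ratio
ci = np.exp(fit.conf_int().loc["early_rejection"])     # 95% confidence interval
print(aor, ci.values)
```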
Abstract:
Within the context of exoplanetary atmospheres, we present a comprehensive linear analysis of forced, damped, magnetized shallow water systems, exploring the effects of dimensionality, geometry (Cartesian, pseudo-spherical, and spherical), rotation, magnetic tension, and hydrodynamic and magnetic sources of friction. Across a broad range of conditions, we find that the key governing equations for atmospheres and quantum harmonic oscillators are identical, even when forcing (stellar irradiation), sources of friction (molecular viscosity, Rayleigh drag, and magnetic drag), and magnetic tension are included. The global atmospheric structure is largely controlled by a single key parameter that involves the Rossby and Prandtl numbers. This near-universality breaks down when either molecular viscosity or magnetic drag acts non-uniformly across latitude or a poloidal magnetic field is present, suggesting that these effects will introduce qualitative changes to the familiar chevron-shaped feature witnessed in simulations of atmospheric circulation. We also find that hydrodynamic and magnetic sources of friction have dissimilar phase signatures and affect the flow in fundamentally different ways, implying that using Rayleigh drag to mimic magnetic drag is inaccurate. We exhaustively lay down the theoretical formalism (dispersion relations, governing equations, and time-dependent wave solutions) for a broad suite of models. In all situations, we derive the steady state of an atmosphere, which is relevant to interpreting infrared phase and eclipse maps of exoplanetary atmospheres. We elucidate a pinching effect that confines the atmospheric structure to be near the equator. Our suite of analytical models may be used to develop physical intuition and as a reference point for three-dimensional magnetohydrodynamic simulations of atmospheric circulation.
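For context, here is the simplest instance of the correspondence mentioned above (my own summary of the standard unforced, undamped, hydrodynamic shallow-water result on an equatorial beta-plane; the paper treats far more general forced, damped, magnetized cases). For wave solutions proportional to $e^{i(kx-\omega t)}$, the meridional-velocity amplitude $\hat v(y)$ obeys

\[
\frac{d^{2}\hat v}{dy^{2}} + \left(\frac{\omega^{2}}{c^{2}} - k^{2} - \frac{k\beta}{\omega} - \frac{\beta^{2}y^{2}}{c^{2}}\right)\hat v = 0 ,
\]

which has the same form as the quantum harmonic oscillator equation

\[
\frac{d^{2}\psi}{d\xi^{2}} + \left(\lambda - \xi^{2}\right)\psi = 0 ,
\]

where $c$ is the shallow-water gravity wave speed and $\beta$ the meridional gradient of the Coriolis parameter. The Gaussian decay of the oscillator eigenfunctions in $y$ is the kind of pinching toward the equator described above.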
Abstract:
Theoretical models predict lognormal species abundance distributions (SADs) in stable and productive environments, with log-series SADs in less stable, dispersal-driven communities. We studied patterns of relative species abundances of perennial vascular plants in global dryland communities to: (i) assess the influence of climatic and soil characteristics on the observed SADs, (ii) infer how environmental variability influences relative abundances, and (iii) evaluate how colonisation dynamics and environmental filters shape abundance distributions. We fitted lognormal and log-series SADs to 91 sites containing at least 15 species of perennial vascular plants. The dependence of species relative abundances on soil and climate variables was assessed using general linear models. Irrespective of habitat type and latitude, the majority of the SADs (70.3%) were best described by a lognormal distribution. Lognormal SADs were associated with low annual precipitation, higher aridity, high soil carbon content, and higher variability of climate variables and soil nitrate. Our results do not corroborate models predicting the prevalence of log-series SADs in dryland communities. As lognormal SADs were particularly associated with sites with drier conditions and a higher environmental variability, we reject models linking lognormality to environmental stability and high productivity conditions. Instead, our results point to the prevalence of lognormal SADs in heterogeneous environments, allowing for more evenly distributed plant communities, or in stressful ecosystems, which are generally shaped by strong habitat filters and limited colonisation. This suggests that drylands may be resilient to environmental changes because the many species with intermediate relative abundances could take over ecosystem functioning if the environment becomes suboptimal for dominant species.
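A hedged sketch of comparing lognormal and log-series fits to a single site's abundance vector (illustrative only; the study's fitting procedure, abundance data, and model-selection criterion may differ):

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical abundances of perennial species at one site
abundances = np.array([120, 85, 60, 41, 30, 22, 15, 11, 8, 6, 4, 3, 2, 2, 1, 1])

# Lognormal: continuous maximum-likelihood fit (location fixed at 0)
shape, loc, scale = stats.lognorm.fit(abundances, floc=0)
ll_lognorm = stats.lognorm.logpdf(abundances, shape, loc, scale).sum()

# Log-series: maximize the discrete log-likelihood over its single parameter p
neg_ll = lambda p: -stats.logser.logpmf(abundances, p).sum()
res = optimize.minimize_scalar(neg_ll, bounds=(1e-6, 1 - 1e-6), method="bounded")
ll_logser = -res.fun

# Crude comparison with AIC (2 free parameters vs. 1)
aic_lognorm = 2 * 2 - 2 * ll_lognorm
aic_logser = 2 * 1 - 2 * ll_logser
print(aic_lognorm, aic_logser)
```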
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
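A minimal sketch of that core idea: turning a raw vital-sign time series into a single latent candidate feature, here a least-squares slope over a fixed window. The window length, sampling resolution, and variable names are illustrative assumptions, not the dissertation's actual specification.

```python
import numpy as np

def trend_feature(values, minutes_per_sample=1.0):
    """Least-squares slope of a time series segment (units per minute)."""
    t = np.arange(len(values)) * minutes_per_sample
    slope, _intercept = np.polyfit(t, values, deg=1)
    return slope

# Example: a 60-minute window of systolic blood pressure sampled every minute
sbp_window = 110 + np.random.default_rng(5).normal(0, 3, 60) - 0.2 * np.arange(60)
features = {
    "sbp_last": sbp_window[-1],               # traditional one-value-per-variable element
    "sbp_trend": trend_feature(sbp_window),   # time series analysis result (latent feature)
}
print(features)
```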
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps in this process is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances.
The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
Abstract:
Inter-individual variation in diet within generalist animal populations is thought to be a widespread phenomenon, but its potential causes are poorly known. Inter-individual variation can be amplified by the availability and use of allochthonous resources, i.e., resources coming from spatially distinct ecosystems. Using a wild population of arctic fox as a study model, we tested hypotheses that could explain variation in both population and individual isotopic niches, used here as a proxy for the trophic niche. The arctic fox is an opportunistic forager, dwelling in terrestrial and marine environments characterized by strong spatial (arctic-nesting birds) and temporal (cyclic lemmings) fluctuations in resource abundance. First, we tested the hypothesis that generalist foraging habits, in association with temporal variation in prey accessibility, should induce temporal changes in isotopic niche width and diet. Second, we investigated whether within-population variation in the isotopic niche could be explained by individual characteristics (sex and breeding status) and environmental factors (spatiotemporal variation in prey availability). We addressed these questions using isotopic analysis and Bayesian mixing models in conjunction with linear mixed-effects models. We found that: i) arctic fox populations can simultaneously undergo short-term (i.e., within a few months) reduction in both isotopic niche width and inter-individual variability in isotopic ratios, ii) individual isotopic ratios were higher and more representative of a marine-based diet for non-breeding than breeding foxes early in spring, and iii) lemming population cycles did not appear to directly influence the diet of individual foxes after taking their breeding status into account. However, lemming abundance was correlated to the proportion of breeding foxes, and could thus indirectly affect the diet at the population scale.
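A hedged sketch of a linear mixed-effects model of the type mentioned above, with individual fox as a random grouping factor. The data frame and its covariate names (d15N, breeding, season, lemming_abundance, fox_id) are hypothetical stand-ins, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 300
df = pd.DataFrame({
    "d15N": rng.normal(9.0, 1.5, n),                    # isotopic ratio (diet proxy)
    "breeding": rng.integers(0, 2, n),
    "season": rng.choice(["early_spring", "summer"], n),
    "lemming_abundance": rng.normal(0.0, 1.0, n),
    "fox_id": rng.integers(1, 41, n),                   # repeated measures per individual
})

# Fixed effects for breeding status, season, and lemming abundance;
# random intercept per individual fox.
model = smf.mixedlm("d15N ~ breeding * season + lemming_abundance",
                    data=df, groups=df["fox_id"])
fit = model.fit()
print(fit.summary())
```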
Abstract:
Within the regression framework, we show how different levels of nonlinearity influence the instantaneous firing rate prediction of single neurons. Nonlinearity can be achieved in several ways. In particular, we can enrich the predictor set with basis expansions of the input variables (enlarging the number of inputs) or train a simple but different model for each area of the data domain. Spline-based models are popular within the first category. Kernel smoothing methods fall into the second category. Whereas the first choice is useful for globally characterizing complex functions, the second is very handy for temporal data and is able to include inner-state subject variations. Also, interactions among stimuli are considered. We compare state-of-the-art firing rate prediction methods with some more sophisticated spline-based nonlinear methods: multivariate adaptive regression splines and sparse additive models. We also study the impact of kernel smoothing. Finally, we explore the combination of various local models in an incremental learning procedure. Our goal is to demonstrate that appropriate nonlinearity treatment can greatly improve the results. We test our hypothesis on both synthetic data and real neuronal recordings in cat primary visual cortex, giving a plausible explanation of the results from a biological perspective.
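A hedged sketch of the first strategy described above, enriching the predictor set with a spline basis expansion of an input variable before fitting a linear model of the firing rate. The data are simulated and this is not the authors' pipeline (which also covers MARS, sparse additive models, kernel smoothing, and incremental combinations of local models).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
stimulus = rng.uniform(0, 2 * np.pi, 500)                      # e.g., grating orientation
rate = np.exp(np.cos(stimulus)) * 10 + rng.normal(0, 1, 500)   # nonlinear tuning curve

# Spline basis expansion of the stimulus + ridge-penalized linear model
X = stimulus.reshape(-1, 1)
model = make_pipeline(SplineTransformer(degree=3, n_knots=8), Ridge(alpha=1.0))
model.fit(X, rate)
predicted_rate = model.predict(X)
```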
Abstract:
Assessing wind conditions becomes harder as terrain complexity increases, so the wind parameters that determine a wind farm's viability, such as the annual average wind speed at all hub heights and the turbulence intensities, must be extrapolated in a reliable manner. Work on these tasks began in the early 1990s with the widely used linear models WAsP and WAsP Engineering, which were designed for simple terrain and give remarkable results there but perform less well on complex orographies. In parallel, non-linearized Navier-Stokes solvers have developed rapidly over the last decade through CFD (Computational Fluid Dynamics) codes, allowing atmospheric boundary layer flows over steep, complex terrain to be simulated more accurately and with reduced uncertainty. This paper describes the features of these models and validates them against meteorological masts installed in highly complex terrain, comparing the results of the models in terms of wind speed and turbulence intensity.
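The two quantities compared in such a validation are straightforward to compute from mast data; a minimal sketch follows (the sample values, sampling rate, and averaging interval are illustrative assumptions).

```python
import numpy as np

def turbulence_intensity(speeds):
    """Turbulence intensity of a block of wind-speed samples (std / mean)."""
    return np.std(speeds) / np.mean(speeds)

# e.g., 10 minutes of 1 Hz anemometer samples from a meteorological mast
rng = np.random.default_rng(8)
measured = rng.normal(8.0, 1.2, 600)

print("mean wind speed [m/s]:", measured.mean())
print("turbulence intensity :", turbulence_intensity(measured))
# Validation then compares these measured values with the WAsP / CFD predictions
# extrapolated to the same location and height.
```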
Abstract:
Cyclic fluctuations of the atmospheric temperature at the dam site, of the water temperature in the reservoir and of the intensity of solar radiation on the faces of the dam cause significant stresses in the body of concrete dams. These stresses can be evaluated first by introducing into the analysis models a linear temperature distribution statically equivalent to the real temperature distribution in the dam; the stress values obtained from this first step must be complemented (especially in the area of the dam faces) with the stress values resulting from the difference between the real temperature law and the linear law at each node. In the case of arch gravity dams, which combine the characteristics of an arch dam with a thick section, both types of temperature-induced stresses are of similar importance. Thermal stress values are directly linked to a series of factors: atmospheric and water temperature and intensity of solar radiation at the dam site, site latitude, azimuth of the dam, as well as the geometrical characteristics of the dam and the thermal properties of the concrete. This thesis first presents a complete study of the physical phenomenon of heat exchange between the environment and the dam itself, and establishes the participation scheme of all parameters involved in the problem considered. A detailed documentary review of available methods and techniques is then carried out, both for the estimation of environmental thermal loads and for the evaluation of the stresses induced by these loads. Variation ranges are also established for the main parameters. The definition of the geometrical parameters of the dam is based on the description of a wide set of arch gravity dams built in Spain and abroad. As a practical reference for the parameters defining the thermal action of the environment, a set of zones in which the thermal parameters reach homogeneous values was established for Spain. The mean value and variation range of the atmospheric temperature were then determined for each zone, based on series of historical values. Summer and winter temperature increases caused by solar radiation were also defined for each zone. Since the hypothesis of thermal stratification in the reservoir was adopted, the maximum and minimum temperature values reached at the bottom of the reservoir were determined for each climatic zone, as well as the law of temperature variation as a function of depth. Various dam-and-foundation configurations were analysed by means of 3D finite element models, in which the dam and foundation were each submitted to different load combinations. The seasonal thermal behaviour of sections of variable thickness was analysed through the application of numerical techniques to one-dimensional models. Contrasting the results of both analyses led to conclusions on the influence of the environmental thermal action on the stress conditions of the structure.
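A hedged numerical sketch (my own illustration, not the thesis code) of the split described above: a through-thickness temperature profile T(z) is decomposed into its statically equivalent linear part (mean plus gradient) and the self-equilibrated remainder, which is the "difference between the real temperature law and the linear law" that must be treated separately near the faces.

```python
import numpy as np

def _integrate(f, z):
    """Trapezoidal integral of samples f over the grid z."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def equivalent_linear_temperature(z, T, h):
    """Split a profile T(z), z in [-h/2, h/2], into the statically equivalent
    linear part and the self-equilibrated remainder."""
    T_mean = _integrate(T, z) / h                   # preserves the resultant axial force
    gradient = 12.0 / h**3 * _integrate(T * z, z)   # preserves the resultant bending moment
    return T_mean, gradient, T - (T_mean + gradient * z)

# Example: a nonlinear daily profile across a 5 m thick arch-gravity section
h = 5.0
z = np.linspace(-h / 2, h / 2, 101)
T = 12.0 + 8.0 * np.exp(-((z - h / 2) ** 2))        # warmer near the sun-exposed face
T_mean, grad, residual = equivalent_linear_temperature(z, T, h)
```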
Abstract:
A series of motion compensation algorithms is run on the challenge data, including methods that optimize only a linear transformation, or a non-linear transformation, or both – first a linear and then a non-linear transformation. Methods that optimize a linear transformation run an initial segmentation of the area of interest around the left myocardium by means of an independent component analysis (ICA) (ICA-*). Methods that optimize non-linear transformations may run directly on the full images, or after linear registration. Non-linear motion compensation approaches applied include one method that only registers pairs of images in temporal succession (SERIAL), one method that registers all images to one common reference (AllToOne), one method that was designed to exploit quasi-periodicity in image data acquired during free breathing and was adapted to also be usable for image data acquired with an initial breath-hold (QUASI-P), a method that uses ICA to identify the motion and eliminate it (ICA-SP), and a method that relies on the estimation of a pseudo ground truth (PG) to guide the motion compensation.
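A hedged sketch of the ICA step that the ICA-* methods rely on: frames of an image series are flattened into a (time x pixels) matrix and decomposed into independent components, one of which typically pairs a quasi-periodic time course with a spatial map around the region of interest. The image sizes, number of components, and synthetic motion signal below are illustrative assumptions, not the challenge data or the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(9)
n_frames, height, width = 60, 64, 64
breathing = np.sin(np.linspace(0, 6 * np.pi, n_frames))      # quasi-periodic motion signal
spatial_pattern = np.zeros((height, width))
spatial_pattern[24:40, 24:40] = 1.0                           # stand-in "myocardial" region
frames = (rng.normal(size=(n_frames, height, width))          # acquisition noise
          + breathing[:, None, None] * spatial_pattern)       # motion-modulated intensity

X = frames.reshape(n_frames, -1)                              # one row per time frame
ica = FastICA(n_components=4, random_state=0)
time_courses = ica.fit_transform(X)                           # (n_frames, 4) temporal signals
spatial_maps = ica.components_.reshape(4, height, width)
# The component whose time course tracks the quasi-periodic signal and whose spatial
# map highlights the region of interest is the kind of output such methods use to
# localize the left myocardium and drive the motion compensation.
```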