983 results for the term Earthquake Prediction
Abstract:
The role of the electrophysiologic (EP) study for risk stratification in patients with arrhythmogenic right ventricular cardiomyopathy is controversial. We investigated the role of inducible sustained monomorphic ventricular tachycardia (SMVT) for the prediction of an adverse outcome (AO), defined as the occurrence of cardiac death, heart transplantation, sudden cardiac death, ventricular fibrillation, ventricular tachycardia with hemodynamic compromise or syncope. Of 62 patients who fulfilled the 2010 Arrhythmogenic Right Ventricular Cardiomyopathy Task Force criteria and underwent an EP study, 30 (48%) experienced an adverse outcome during a median follow-up of 9.8 years. SMVT was inducible in 34 patients (55%), 22 (65%) of whom had an adverse outcome. In contrast, of 28 patients without inducible SMVT, 8 (29%) had an adverse outcome. Kaplan-Meier analysis showed an event-free survival benefit for patients without inducible SMVT (log-rank p = 0.008), with a cumulative survival free of an adverse outcome of 72% (95% confidence interval [CI] 56% to 92%) in the group without inducible SMVT compared to 26% (95% CI 14% to 50%) in the other group after 10 years. The inducibility of SMVT during the EP study (hazard ratio [HR] 2.99, 95% CI 1.23 to 7.27), nonadherence (HR 2.74, 95% CI 1.3 to 5.77), and heart failure New York Heart Association functional class II and III (HR 2.25, 95% CI 1.04 to 4.87) were associated with an adverse outcome on univariate Cox regression analysis. The inducibility of SMVT (HR 2.52, 95% CI 1.03 to 6.16, p = 0.043) and nonadherence (HR 2.34, 95% CI 1.1 to 4.99, p = 0.028) remained as significant predictors on multivariate analysis. These long-term observational data suggest that SMVT inducibility during EP study might predict an adverse outcome in patients with arrhythmogenic right ventricular cardiomyopathy, advocating a role for EP study in risk stratification.
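The event-free survival comparison above rests on the Kaplan-Meier estimator. A minimal pure-Python sketch of that estimator follows; the four-patient cohort below is hypothetical and purely illustrative, not data from the study.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of event-free survival.

    times  : follow-up time in years for each patient
    events : 1 if the adverse outcome occurred at that time, 0 if censored
    Returns a list of (event time, survival probability) pairs.
    """
    curve, s = [], 1.0
    for t in sorted({t for t, e in zip(times, events) if e}):
        at_risk = sum(1 for ti in times if ti >= t)  # still under follow-up at t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei)  # events at t
        s *= 1.0 - d / at_risk
        curve.append((t, s))
    return curve

# Hypothetical cohort: events at years 1 and 3, censoring at years 2 and 4.
curve = kaplan_meier([1.0, 2.0, 3.0, 4.0], [1, 0, 1, 0])
```

A log-rank test would then compare two such curves (inducible vs. non-inducible SMVT), as reported in the abstract.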
Abstract:
Metallic catcher foils have been investigated for their thermal release capabilities for future superheavy element studies. These catcher materials are intended to serve as the link between the production and the chemical investigation of superheavy elements (SHE) under vacuum conditions. The diffusion constants and activation energies of diffusion were extrapolated for various catcher materials using a model based on atomic volume. Release rates can now be estimated for predefined experimental conditions using the determined diffusion values. The potential release behavior of the volatile SHE Cn (E112), E113, Fl (E114), E115, and Lv (E116) from polycrystalline metallic foils of Ni, Y, Zr, Nb, Mo, Hf, Ta, and W is predicted. Example calculations showed that Zr is the best-suited material in terms of on-line release efficiency and long-term operating stability. If temperatures up to 2773 K are applicable, tungsten is suggested as the material of choice for such experiments.
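The release estimate described above combines an Arrhenius-type diffusion coefficient with Fick's-law release from a foil. A sketch of that calculation is shown below; the pre-exponential factor, activation energy, foil thickness, and heating time are invented placeholders (the paper's extrapolated values are not reproduced here), and the release fraction uses only the first term of the series solution, valid at long times.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diffusion_coefficient(d0, ea, temp):
    """Arrhenius form D = D0 * exp(-Ea / (R*T)).

    d0   : pre-exponential factor, m^2/s (placeholder)
    ea   : activation energy of diffusion, J/mol (placeholder)
    temp : foil temperature, K
    """
    return d0 * math.exp(-ea / (R * temp))

def released_fraction(diff, thickness, time):
    """First-term (long-time) approximation of Fick's-law release
    from a plane foil with both faces open."""
    tau = thickness ** 2 / (math.pi ** 2 * diff)
    return 1.0 - (8.0 / math.pi ** 2) * math.exp(-time / tau)

# Placeholder comparison of two foil temperatures:
d_1700 = diffusion_coefficient(1e-8, 300e3, 1700.0)
d_2773 = diffusion_coefficient(1e-8, 300e3, 2773.0)
```

The qualitative conclusion of the abstract (higher temperature, faster release) follows directly from the exponential temperature dependence.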
Abstract:
Numerical simulations based on plans for a deep geothermal system in Basel, Switzerland are used here to understand chemical processes that occur in an initially dry granitoid reservoir during hydraulic stimulation and long-term water circulation to extract heat. An important question regarding the sustainability of such enhanced geothermal systems (EGS) is whether water–rock reactions will eventually lead to clogging of flow paths in the reservoir and thereby reduce or even completely block fluid throughput. A reactive transport model allows the main chemical reactions to be predicted and the resulting evolution of porosity to be tracked over the expected 30-year operational lifetime of the system. The simulations show that injection of surface water to stimulate fracture permeability in the monzogranite reservoir at 190 °C and 5000 m depth induces redox reactions between the oxidised surface water and the reduced wall rock. Although new calcite, chlorite, hematite and other minerals precipitate near the injection well, their volumes are low and more than compensated by those of the dissolving wall-rock minerals. Thus, during stimulation, reduction of injectivity by mineral precipitation is unlikely. During the simulated long-term operation of the system, the main mineral reactions are the hydration and albitization of plagioclase, the alteration of hornblende to an assemblage of smectites and chlorites, and the alteration of primary K-feldspar to muscovite and microcline. Within a closed-system doublet, the composition of the circulated fluid changes only slightly during its repeated passage through the reservoir, as the wall rock essentially undergoes isochemical recrystallization. Even after 30 years of circulation, the calculations show that porosity is reduced by only ∼0.2%, well below the expected fracture porosity induced by stimulation.
This result suggests that permeability reduction owing to water–rock interaction is unlikely to jeopardize the long-term operation of deep, granitoid-hosted EGS systems. A peculiarity at Basel is the presence of anhydrite as fracture coatings at ∼5000 m depth. Simulated exposure of the circulating fluid to anhydrite induces a stronger redox disequilibrium in the reservoir, driving dissolution of ferrous minerals and precipitation of ferric smectites, hematite and pyrite. However, even in this scenario the porosity reduction is at most 0.5%, a value which is unproblematic for sustainable fluid circulation through the reservoir.
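The porosity bookkeeping in such a simulation amounts to subtracting the net precipitated mineral volume from the pore space at each time step. A toy sketch follows; the constant net precipitation rate is an invented placeholder chosen only so that the 30-year reduction matches the ∼0.2% (absolute) figure quoted above, whereas the actual model computes rates from the full reaction network.

```python
def evolve_porosity(phi0, net_precip_rate, years):
    """Track porosity under a constant net mineral precipitation rate.

    phi0            : initial (fracture) porosity, volume fraction
    net_precip_rate : net precipitated mineral volume per rock volume per year
                      (negative would mean net dissolution)
    Returns the year-by-year porosity trajectory.
    """
    phi, traj = phi0, [phi0]
    for _ in range(years):
        phi = max(phi - net_precip_rate, 0.0)
        traj.append(phi)
    return traj

# Placeholder: 5% initial porosity, 0.2% absolute reduction over 30 years.
trajectory = evolve_porosity(0.05, 0.002 / 30, 30)
```

Even this crude linear bookkeeping illustrates why the simulated reduction stays far below the stimulation-induced fracture porosity.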
Abstract:
Brain tumor is one of the most aggressive types of cancer in humans, with an estimated median survival time of 12 months and only 4% of the patients surviving more than 5 years after disease diagnosis. Until recently, brain tumor prognosis has been based only on clinical information such as tumor grade and patient age, but there are reports indicating that molecular profiling of gliomas can reveal subgroups of patients with distinct survival rates. We hypothesize that coupling molecular profiling of brain tumors with clinical information might improve predictions of patient survival time and, consequently, better guide future treatment decisions. In order to evaluate this hypothesis, the general goal of this research is to build models for survival prediction of glioma patients using DNA molecular profiles (U133 Affymetrix gene expression microarrays) along with clinical information. First, a predictive Random Forest model is built for binary outcomes (i.e. short vs. long-term survival) and a small subset of genes whose expression values can be used to predict survival time is selected. Next, a new statistical methodology is developed for predicting time-to-death outcomes using Bayesian ensemble trees. Due to the large heterogeneity observed within prognostic classes obtained by the Random Forest model, prediction can be improved by relating time-to-death directly to the gene expression profile. We propose a Bayesian ensemble model for survival prediction which is appropriate for high-dimensional data such as gene expression data. Our approach is based on the ensemble "sum-of-trees" model, which is flexible enough to incorporate additive and interaction effects between genes. We specify a fully Bayesian hierarchical approach and illustrate our methodology for the Cox proportional hazards (CPH), Weibull, and accelerated failure time (AFT) survival models. We overcome the lack of conjugacy using a latent variable formulation to model the covariate effects, which decreases computation time for model fitting.
In addition, our proposed models provide a model-free way to select important predictive prognostic markers based on controlling false discovery rates. We compare the performance of our methods with baseline reference survival methods and apply our methodology to an unpublished data set of brain tumor survival times and gene expression data, selecting genes potentially related to the development of the disease under study. A closing discussion compares results obtained by Random Forest and Bayesian ensemble methods from the biological/clinical perspective and highlights the statistical advantages and disadvantages of the new methodology in the context of DNA microarray data analysis.
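Marker selection under false-discovery-rate control can be illustrated with the standard Benjamini-Hochberg step-up procedure, a generic sketch rather than the authors' exact model-free rule; the per-gene p-values below are hypothetical.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the indices of hypotheses rejected at FDR level q
    by the Benjamini-Hochberg step-up procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its BH threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# Hypothetical per-gene p-values; genes 0, 2 and 3 survive FDR control.
selected = benjamini_hochberg([0.01, 0.20, 0.03, 0.02])
```

The step-up structure matters: gene 2 (p = 0.03) is rejected only because a smaller p-value below it clears its own threshold.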
Abstract:
Long Term Evolution (LTE) is the fourth-generation (4G) technology capable of providing high data rates as well as support for high-speed mobility. The EU FP7 Mobile Cloud Networking (MCN) project integrates cloud computing concepts into LTE mobile networks in order to increase LTE's performance. In this way, a shared, distributed, virtualized LTE mobile network is built that can optimize the utilization of virtualized computing, storage and network resources and minimize communication delays. Two important features that can be used in such a virtualized system to improve its performance are user mobility prediction and bandwidth prediction. This paper introduces the architecture and challenges associated with user mobility and bandwidth prediction approaches in virtualized LTE systems.
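A common baseline for user mobility prediction in cellular systems is an order-1 Markov model over observed cell handovers. The sketch below is a generic illustration of that idea, not the MCN project's approach; cell names and trajectories are invented.

```python
from collections import Counter, defaultdict

def train_mobility_model(trajectories):
    """Count cell-to-cell handovers over observed user trajectories."""
    counts = defaultdict(Counter)
    for path in trajectories:
        for here, nxt in zip(path, path[1:]):
            counts[here][nxt] += 1
    return counts

def predict_next_cell(model, current):
    """Most frequently observed next cell, or None if the cell is unseen."""
    if current not in model:
        return None
    return model[current].most_common(1)[0][0]

# Invented handover traces for one user:
model = train_mobility_model([["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]])
```

Such a predictor lets the virtualized network pre-provision resources at the cell a user is most likely to enter next.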
Abstract:
BACKGROUND Heart failure with preserved ejection fraction (HFpEF) represents a growing health burden associated with substantial mortality and morbidity. Consequently, risk prediction is of highest importance. Endothelial dysfunction has recently been shown to play an important role in the complex pathophysiology of HFpEF. We therefore aimed to assess von Willebrand factor (vWF), a marker of endothelial damage, as a potential biomarker for risk assessment in patients with HFpEF. METHODS AND RESULTS Concentrations of vWF were assessed in 457 patients with HFpEF enrolled as part of the LUdwigshafen Risk and Cardiovascular Health (LURIC) study. All-cause mortality was observed in 40% of patients during a median follow-up time of 9.7 years. vWF significantly predicted mortality with a hazard ratio (HR) per 1-SD increase of 1.45 (95% confidence interval, 1.26-1.68; P<0.001) and remained a significant predictor after adjustment for age, sex, body mass index, N-terminal pro-B-type natriuretic peptide (NT-proBNP), renal function, and frequent HFpEF-related comorbidities (adjusted HR per 1 SD, 1.22; 95% confidence interval, 1.05-1.42; P=0.001). Most notably, vWF showed additional prognostic value beyond that achievable with NT-proBNP, as indicated by improvements in the C-statistic (vWF×NT-proBNP: 0.65 versus NT-proBNP: 0.63; P for comparison, 0.004) and category-free net reclassification index (37.6%; P<0.001). CONCLUSIONS vWF is an independent predictor of long-term outcome in patients with HFpEF, which is in line with endothelial dysfunction as a potential mediator in the pathophysiology of HFpEF. In particular, combined assessment of vWF and NT-proBNP improved risk prediction in this vulnerable group of patients.
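Reporting a hazard ratio "per 1 SD increase" simply means the biomarker was z-scored before entering the Cox model, so one unit of the covariate equals one sample standard deviation. A small sketch of that convention, with hypothetical marker values:

```python
import math
import statistics

def z_scores(values):
    """Standardize values so a one-unit change equals one sample SD."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

def hazard_ratio_per_sd(beta):
    """Convert a Cox coefficient on the z-scored marker to an HR per 1 SD."""
    return math.exp(beta)

# Hypothetical vWF concentrations, standardized before model fitting:
standardized = z_scores([80.0, 120.0, 150.0, 210.0])
```

For example, a fitted coefficient of ln(1.45) on the standardized marker corresponds exactly to the HR of 1.45 per 1 SD quoted above.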
Abstract:
The value of electrocardiographic findings for predicting adverse outcome in patients with arrhythmogenic right ventricular dysplasia (ARVD) is not well known. We hypothesized that ventricular depolarization and repolarization abnormalities on the 12-lead surface electrocardiogram (ECG) predict adverse outcome in patients with ARVD. ECGs of 111 patients screened for the 2010 ARVD Task Force Criteria from 3 Swiss tertiary care centers were digitized and analyzed with a digital caliper by 2 independent observers blinded to the outcome. ECGs were compared in 2 patient groups: (1) patients with major adverse cardiovascular events (MACE: a composite of cardiac death, heart transplantation, survived sudden cardiac death, ventricular fibrillation, sustained ventricular tachycardia, or arrhythmic syncope) and (2) all remaining patients. A total of 51 patients (46%) experienced MACE during a follow-up period with a median of 4.6 years (interquartile range 1.8 to 10.0). Kaplan-Meier analysis revealed reduced times to MACE for patients with repolarization abnormalities according to the Task Force Criteria (p = 0.009), a precordial QRS amplitude ratio (∑QRS mV V1 to V3/∑QRS mV V1 to V6) of ≤ 0.48 (p = 0.019), and QRS fragmentation (p = 0.045). In multivariable Cox regression, a precordial QRS amplitude ratio of ≤ 0.48 (hazard ratio [HR] 2.92, 95% confidence interval [CI] 1.39 to 6.15, p = 0.005), T-wave inversions in the inferior leads (HR 2.44, 95% CI 1.15 to 5.18, p = 0.020), and QRS fragmentation (HR 2.65, 95% CI 1.1 to 6.34, p = 0.029) remained as independent predictors of MACE. In conclusion, in this multicenter, observational, long-term study, electrocardiographic findings were useful for risk stratification in patients with ARVD, with repolarization criteria, T-wave inversion (TWI) in the inferior leads, a precordial QRS amplitude ratio of ≤ 0.48, and QRS fragmentation constituting valuable variables to predict adverse outcome.
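The precordial QRS amplitude ratio defined above (∑QRS V1-V3 over ∑QRS V1-V6) is straightforward to compute once the six amplitudes are measured. A small sketch, with invented amplitude values:

```python
def precordial_qrs_ratio(qrs_mv_v1_to_v6):
    """Sum of QRS amplitudes in V1-V3 divided by the sum in V1-V6 (all in mV).

    qrs_mv_v1_to_v6 : amplitudes ordered V1, V2, V3, V4, V5, V6
    """
    return sum(qrs_mv_v1_to_v6[:3]) / sum(qrs_mv_v1_to_v6)

def low_ratio(qrs_mv_v1_to_v6, cutoff=0.48):
    """True if the ratio falls at or below the 0.48 cutoff used in the study."""
    return precordial_qrs_ratio(qrs_mv_v1_to_v6) <= cutoff

# Invented examples: balanced amplitudes vs. attenuated right precordial leads.
balanced = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
attenuated = [0.5, 0.5, 0.5, 1.5, 1.5, 1.5]
```

Right precordial attenuation (smaller V1-V3 amplitudes) lowers the ratio, which is the pattern the study links to MACE.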
Abstract:
Objectives: To examine the predictive value of early improvement for short- and long-term outcome in the treatment of depressed female inpatients and to explore the influence of comorbid disorders (CD). Methods: Archival data from a naturalistic sample of 277 female inpatients diagnosed with a depressive disorder were analyzed, with the Beck Depression Inventory (BDI) assessed at baseline, after 20 days and 30 days, at posttreatment, and after 3 to 6 months at follow-up. Early improvement, defined as a decrease in the BDI score of at least 30% after 20 and after 30 days, and CD were analyzed using binary logistic regression. Results: Both early improvement definitions were predictive of remission at posttreatment. Early improvement after 30 days showed a sustained treatment effect in the follow-up phase, whereas early improvement after 20 days failed to show a persistent effect on remission at follow-up. CD were not significantly related to outcome either at posttreatment or at follow-up, and at no time point did CD moderate the prediction by early improvement. Conclusions: We show that early improvement is a valid predictor of remission both at posttreatment and at follow-up in an inpatient setting. CD did not predict outcome. Further studies are needed to identify patient subgroups amenable to more tailored treatments.
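The early-improvement criterion used above reduces to a simple relative-change check on the BDI score. A sketch, with hypothetical scores:

```python
def early_improvement(bdi_baseline, bdi_current, threshold=0.30):
    """True if the BDI score dropped by at least `threshold` (default 30%)
    relative to baseline."""
    if bdi_baseline <= 0:
        raise ValueError("baseline BDI must be positive")
    return (bdi_baseline - bdi_current) / bdi_baseline >= threshold

# Hypothetical patient: baseline BDI 30, BDI 20 after 20 days -> 33% decrease.
flag_day20 = early_improvement(30, 20)
```

This binary flag is then entered as a predictor in the logistic regression on remission.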
Abstract:
OBJECTIVE Reliable tools to predict long-term outcome among patients with well compensated advanced liver disease due to chronic HCV infection are lacking. DESIGN Risk scores for mortality and for cirrhosis-related complications were constructed with Cox regression analysis in a derivation cohort and evaluated in a validation cohort, both including patients with chronic HCV infection and advanced fibrosis. RESULTS In the derivation cohort, 100/405 patients died during a median 8.1 (IQR 5.7-11.1) years of follow-up. Multivariate Cox analyses showed that age (HR=1.06, 95% CI 1.04 to 1.09, p<0.001), male sex (HR=1.91, 95% CI 1.10 to 3.29, p=0.021), platelet count (HR=0.91, 95% CI 0.87 to 0.95, p<0.001) and log10 aspartate aminotransferase/alanine aminotransferase ratio (HR=1.30, 95% CI 1.12 to 1.51, p=0.001) were independently associated with mortality (C statistic=0.78, 95% CI 0.72 to 0.83). In the validation cohort, 58/296 patients with cirrhosis died during a median of 6.6 (IQR 4.4-9.0) years. Among patients with estimated 5-year mortality risks <5%, 5-10% and >10%, the observed 5-year mortality rates in the derivation cohort and validation cohort were 0.9% (95% CI 0.0 to 2.7) and 2.6% (95% CI 0.0 to 6.1), 8.1% (95% CI 1.8 to 14.4) and 8.0% (95% CI 1.3 to 14.7), 21.8% (95% CI 13.2 to 30.4) and 20.9% (95% CI 13.6 to 28.1), respectively (C statistic in validation cohort = 0.76, 95% CI 0.69 to 0.83). The risk score for cirrhosis-related complications also incorporated HCV genotype (C statistic = 0.80, 95% CI 0.76 to 0.83 in the derivation cohort; and 0.74, 95% CI 0.68 to 0.79 in the validation cohort). CONCLUSIONS Prognosis of patients with chronic HCV infection and compensated advanced liver disease can be accurately assessed with risk scores including readily available objective clinical parameters.
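A Cox risk score of this kind is a linear predictor over the log hazard ratios. The sketch below uses the point estimates reported in the abstract to compare two patients' relative hazard; the patient values and covariate units are illustrative assumptions (the abstract does not state the platelet units), and absolute 5-year risk would additionally require the baseline survival function, which is not given here.

```python
import math

# Log hazard ratios from the derivation cohort, per the abstract:
COEF = {
    "age": math.log(1.06),          # per year
    "male": math.log(1.91),         # 1 = male, 0 = female
    "platelets": math.log(0.91),    # per unit of platelet count (units assumed)
    "log_ast_alt": math.log(1.30),  # per unit of log10 AST/ALT ratio
}

def linear_predictor(patient):
    return sum(COEF[k] * patient[k] for k in COEF)

def relative_hazard(patient_a, patient_b):
    """Hazard of patient A relative to patient B under the fitted Cox model."""
    return math.exp(linear_predictor(patient_a) - linear_predictor(patient_b))

# Hypothetical patients differing only in age:
p = {"age": 55, "male": 1, "platelets": 150, "log_ast_alt": 0.0}
older = dict(p, age=65)
```

Ten extra years of age multiply the hazard by 1.06^10 ≈ 1.79, everything else equal.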
Abstract:
BACKGROUND Multiple scores have been proposed to stratify bleeding risk, but their value to guide dual antiplatelet therapy duration has never been appraised. We compared the performance of the CRUSADE (Can Rapid Risk Stratification of Unstable Angina Patients Suppress Adverse Outcomes With Early Implementation of the ACC/AHA Guidelines), ACUITY (Acute Catheterization and Urgent Intervention Triage Strategy), and HAS-BLED (Hypertension, Abnormal Renal/Liver Function, Stroke, Bleeding History or Predisposition, Labile INR, Elderly, Drugs/Alcohol Concomitantly) scores in 1946 patients recruited in the Prolonging Dual Antiplatelet Treatment After Grading Stent-Induced Intimal Hyperplasia Study (PRODIGY) and assessed hemorrhagic and ischemic events in the 24- and 6-month dual antiplatelet therapy groups. METHODS AND RESULTS Bleeding score performance was assessed with a Cox regression model and C statistics. Discriminative and reclassification power was assessed with net reclassification improvement and integrated discrimination improvement. The C statistic was similar between the CRUSADE score (area under the curve 0.71) and ACUITY (area under the curve 0.68), and higher than HAS-BLED (area under the curve 0.63). CRUSADE, but not ACUITY, improved reclassification (net reclassification index 0.39, P=0.005) and discrimination (integrated discrimination improvement index 0.0083, P=0.021) of major bleeding compared with HAS-BLED. Major bleeding and transfusions were higher in the 24- versus 6-month dual antiplatelet therapy groups in patients with a CRUSADE score >40 (hazard ratio for bleeding 2.69, P=0.035; hazard ratio for transfusions 4.65, P=0.009) but not in those with CRUSADE score ≤40 (hazard ratio for bleeding 1.50, P=0.25; hazard ratio for transfusions 1.37, P=0.44), with positive interaction (Pint=0.05 and Pint=0.01, respectively). 
The numbers of patients with high CRUSADE scores needed to treat for harm for major bleeding and transfusion were 17 and 15, respectively, with 24-month rather than 6-month dual antiplatelet therapy; the corresponding figures in the overall population were 67 and 71. CONCLUSIONS Our analysis suggests that the CRUSADE score predicts major bleeding similarly to ACUITY and better than HAS-BLED in an all-comer population undergoing percutaneous coronary intervention, and potentially identifies patients at higher risk of hemorrhagic complications when treated with a long-term dual antiplatelet therapy regimen. CLINICAL TRIAL REGISTRATION URL: http://clinicaltrials.gov. Unique identifier: NCT00611286.
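The number needed to treat for harm (NNH) quoted above is the reciprocal of the absolute risk increase between the two therapy durations. A sketch of the arithmetic, with illustrative event rates (the per-arm rates below are invented to land near the reported NNH of 17; the trial's actual rates are not reproduced in the abstract):

```python
def number_needed_to_harm(risk_long, risk_short):
    """Reciprocal of the absolute risk increase between two regimens.

    risk_long  : event rate under the longer (24-month) therapy
    risk_short : event rate under the shorter (6-month) therapy
    """
    ari = risk_long - risk_short
    if ari <= 0:
        raise ValueError("no excess harm under the longer regimen")
    return 1.0 / ari

# Illustrative rates: 12% vs 6% major bleeding -> NNH about 17.
nnh = number_needed_to_harm(0.12, 0.06)
```

A small NNH in the high-CRUSADE subgroup and a large NNH overall is exactly the interaction pattern the abstract reports.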
Abstract:
Patent and trademark offices that are run according to principles of new management have an inherent need for dependable forecasting data when planning capacity and service levels. For the Spanish Office of Patents and Trademarks to plan its resource needs efficiently, it requires methods for predicting changes in the number of patent and trademark applications at different time horizons. The prediction of the time series of Spanish patent and trademark applications (1979-2009) was based on several time series forecasting techniques over a short-term horizon. The methods used can be grouped into two specific areas: trend regression models and time series models. The results of this study show that the series of patent and trademark applications can be modelled with different models, especially ARIMA, with satisfactory model fit and relatively low error.
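The simplest member of the model family used above is an AR(1) process, the autoregressive core of an ARIMA model. A minimal least-squares sketch follows, with an invented toy series; a real application would use a full ARIMA fit on the yearly application counts.

```python
def fit_ar1(series):
    """Least-squares fit of y[t] = c + phi * y[t-1],
    a minimal stand-in for the ARIMA models used in the study."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    c = my - phi * mx
    return c, phi

def forecast_next(series):
    """One-step-ahead forecast from the fitted AR(1) model."""
    c, phi = fit_ar1(series)
    return c + phi * series[-1]

# Toy series following y[t] = 0.5 * y[t-1] exactly:
series = [100.0, 50.0, 25.0, 12.5, 6.25]
```

On this noiseless series the fit recovers phi = 0.5 and c = 0 exactly, so the forecast is simply half the last value.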
Abstract:
The use of seismic hysteretic dampers for passive control has increased markedly in recent years for both new and existing buildings. In order to utilize hysteretic dampers within a structural system, it is of paramount importance to have simplified design procedures based upon knowledge gained from theoretical studies and validated with experimental results. Non-linear Static Procedures (NSPs) are presented as an alternative to the force-based methods more common nowadays. The application of NSPs to conventional structures is well established, yet there is a lack of experimental information on how NSPs apply to systems with hysteretic dampers. In this research, several shaking table tests were conducted on two single-bay, single-story 1:2 scale structures with and without hysteretic dampers. The maximum response of the structure with dampers in terms of lateral displacement and base shear obtained from the tests was compared with the predictions provided by three well-known NSPs: (1) the improved version of the Capacity Spectrum Method (CSM) from FEMA 440; (2) the improved version of the Displacement Coefficient Method (DCM) from FEMA 440; and (3) the N2 Method implemented in Eurocode 8. In general, the improved DCM and the N2 Method are found to provide acceptable accuracy, while the CSM tends to underestimate the response.
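The Displacement Coefficient Method estimates the target (maximum) displacement by scaling the elastic spectral displacement with empirical modification coefficients. The sketch below shows the basic form of that estimate; the coefficient values in the example are illustrative placeholders, not FEMA 440 prescriptions, which depend on the structure's period, strength ratio and hysteretic behaviour.

```python
import math

def target_displacement(c0, c1, c2, sa_g, te, g=9.81):
    """DCM-style estimate: delta = C0 * C1 * C2 * Sa * (Te / (2*pi))^2 * g.

    c0, c1, c2 : empirical modification coefficients
    sa_g       : elastic spectral acceleration at Te, in units of g
    te         : effective fundamental period, s
    Returns the target displacement in metres.
    """
    return c0 * c1 * c2 * sa_g * g * (te / (2.0 * math.pi)) ** 2

# Placeholder coefficients for two effective periods:
d_short = target_displacement(1.2, 1.0, 1.0, 0.5, 0.4)
d_long = target_displacement(1.2, 1.0, 1.0, 0.5, 0.8)
```

Because the elastic spectral displacement grows with Te squared, doubling the effective period quadruples the estimate at fixed spectral acceleration.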
Abstract:
Although most of the research on Cognitive Radio is focused on communication bands above the HF upper limit (30 MHz), Cognitive Radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we treat legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to enable efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of primary user activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and so avoid collisions. It is based on Hidden Markov Models (HMMs), a powerful tool for modelling stochastic processes, and is trained with real measurements of the 14 MHz band. Using the proposed HMM-based model, the prediction achieves an average 10.3% error rate with one minute of prior channel knowledge; the error decreases as this knowledge is extended, reaching an average 5.8% with the previous 8 minutes of observations. These results suggest that the resulting activity model for the HF band could be used to predict primary user activity and be included in a future HF cognitive-radio-based station.
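The occupancy-prediction idea can be illustrated with a two-state (idle/busy) Markov chain, the simplest special case of the HMMs used in the paper. The sketch below propagates the busy probability forward in time; the transition probabilities are invented, whereas the paper's model is trained on real 14 MHz band measurements.

```python
def propagate(p_busy, p_stay_busy, p_become_busy, steps):
    """Probability that the channel is busy after `steps` transitions.

    p_busy        : current probability that a primary user occupies the channel
    p_stay_busy   : P(busy -> busy) per step
    p_become_busy : P(idle -> busy) per step
    """
    for _ in range(steps):
        p_busy = p_busy * p_stay_busy + (1.0 - p_busy) * p_become_busy
    return p_busy

# Invented dynamics: sticky occupancy (0.9) with moderate arrivals (0.2).
long_run = propagate(0.0, 0.9, 0.2, 500)
```

Iterating the chain converges to the stationary busy probability p_become_busy / (1 - p_stay_busy + p_become_busy), here 2/3, which is the long-run channel occupancy a secondary user would plan around.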
Abstract:
In recent years significant effort has been devoted to the development of advanced data analysis tools both to predict the occurrence of disruptions and to investigate the operational spaces of devices, with the long-term goal of advancing the understanding of the physics of these events and of preparing for ITER. On JET, the latest generation of the disruption predictor, called APODIS, has been deployed in the real-time network during the last campaigns with the new metallic wall. Even though it was trained only on discharges with the carbon wall, it has reached very good performance, with both missed alarms and false alarms of the order of a few percent (and strategies to improve the performance have already been identified). Since predicting the type of disruption is also considered very important for optimising the mitigation measures, a new clustering method, based on the geodesic distance on a probabilistic manifold, has been developed. This technique allows automatic classification of an incoming disruption with a success rate better than 85%. Various other manifold learning tools, particularly Principal Component Analysis and Self-Organising Maps, are also producing very interesting results in the comparative analysis of the JET and ASDEX Upgrade (AUG) operational spaces, on the route to developing predictors capable of extrapolating from one device to another.
Abstract:
Wind power forecasting has played a fundamental role over the last decade in the exploitation of this renewable resource, as it reduces the impact that the fluctuating nature of the wind has on the activities of the various agents involved in its integration, such as the system operator or electricity market agents. The high levels of wind penetration recently reached by some countries have highlighted the need to improve forecasts during events in which the power generated by a wind farm, or a group of farms, varies substantially within a relatively short time (on the order of a few hours). These events, known as ramps, have no single cause: they can be driven by meteorological processes occurring at very different spatio-temporal scales, from the passage of large fronts at the macroscale to local convective processes such as thunderstorms. In addition, the wind-to-power conversion process itself plays a relevant role in ramp occurrence owing, among other factors, to the non-linear relationship imposed by the turbine power curve, the misalignment of the machine with respect to the wind, and the aerodynamic interaction between turbines. This work addresses the application of statistical models to very short-term ramp forecasting and investigates the relationship between such events and macroscale atmospheric processes. The models are used to generate point forecasts from stochastic modelling of the power time series generated by a wind farm, for forecast horizons of one to six hours. As a first step, a methodology to characterise ramps in time series was developed. The so-called ramp function is based on the wavelet transform and provides an index at each time step. This index characterises ramp intensity in terms of the power gradients experienced over a given range of time scales. Three types of predictive models were implemented to assess the role that model complexity plays in performance: linear autoregressive (AR) models, varying-coefficient models (VCMs) and models based on artificial neural networks (ANNs). The models were trained by minimising the mean squared error, and the configuration of each was determined by cross-validation. To analyse the contribution of the macroscale state of the atmosphere to ramp forecasting, a methodology was proposed for extracting, from the outputs of meteorological models, information relevant to explaining the occurrence of these events. It is based on principal component analysis (PCA) to synthesise the atmospheric data and on mutual information (MI) to estimate the non-linear dependence between two signals. This methodology was applied to reanalysis data generated with a general circulation model (GCM) to derive exogenous variables that were then fed into the predictive models. The case studies considered correspond to two wind farms located in Spain. The results show that modelling the power series yielded a notable improvement over the reference forecasting model (persistence) and that adding macroscale information brought further improvements of the same order. These improvements were larger for ramp-down events. The results also indicate different degrees of connection between the macroscale and ramp occurrence at the two wind farms considered.
Abstract: One of the main drawbacks of wind energy is that it exhibits intermittent generation greatly depending on environmental conditions.
Wind power forecasting has proven to be an effective tool for facilitating wind power integration from both the technical and the economic perspective. Indeed, system operators and energy traders benefit from the use of forecasting techniques, because the reduction of the inherent uncertainty of wind power allows them to adopt optimal decisions. Wind power integration imposes new challenges as higher wind penetration levels are attained. Wind power ramp forecasting is an example of such a recent topic of interest. The term ramp refers to a large and rapid variation (over 1-4 hours) observed in the wind power output of a wind farm or portfolio. Ramp events can be caused by a broad number of meteorological processes that occur at different time/spatial scales, from the passage of large-scale frontal systems to local processes such as thunderstorms and thermally driven flows. Ramp events may also be conditioned by features of the wind-to-power conversion process, such as yaw misalignment, wind turbine shut-down and the aerodynamic interaction between the wind turbines of a wind farm (wake effect). This work is devoted to wind power ramp forecasting, with special focus on the connection between the global scale and ramp events observed at the wind farm level, within a point-forecasting framework. Time series based models were implemented for very short-term prediction, characterised by prediction horizons of up to six hours ahead. As a first step, a methodology to characterise ramps within a wind power time series was proposed. The so-called ramp function is based on the wavelet transform and provides a continuous index related to the ramp intensity at each time step. The underlying idea is that ramps are characterised by high power output gradients evaluated over different time scales.
A number of state-of-the-art time series based models were considered, namely linear autoregressive (AR) models, varying-coefficient models (VCMs) and artificial neural networks (ANNs). This allowed us to gain insight into how the complexity of the model contributes to the accuracy of the wind power time series modelling. The models were trained on the basis of a mean squared error criterion, and the final set-up of each model was determined through cross-validation. In order to investigate the contribution of the global scale to wind power ramp forecasting, a methodology was proposed to identify features in raw atmospheric data that are relevant for explaining wind power ramp events. It is based on two techniques: principal component analysis (PCA) for atmospheric data compression and mutual information (MI) for assessing non-linear dependence between variables. The methodology was applied to reanalysis data generated with a general circulation model (GCM), allowing the elaboration of explanatory variables meaningful for ramp forecasting that were used as exogenous variables by the forecasting models. The study covered two wind farms located in Spain. All the models outperformed the reference model (persistence) during both ramp and non-ramp situations. Adding atmospheric information had a noticeable impact on forecasting performance, especially during ramp-down events. Results also suggested different levels of connection between ramp occurrence at the wind farm level and the global scale.
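The mutual-information step above can be sketched with a plug-in histogram estimator for discrete sequences; a real application would first discretise the continuous atmospheric and power variables, a preprocessing step not shown here.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in mutual information estimate (in nats) between two
    equally long discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) with the marginals expressed as counts / n
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

# Perfectly dependent vs. independent toy sequences:
mi_dependent = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
mi_independent = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

A perfectly dependent binary pair yields MI = ln 2 nats (one bit), while an independent pair yields zero, which is exactly the contrast used to rank candidate macroscale features.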