942 results for non-parametric background modeling


Relevance: 100.00%

Abstract:

Corporate executives closely monitor the accuracy of their hotels' occupancy forecasts, since important decisions are based upon these predictions. This study lists the criteria for selecting an appropriate error measure, discusses several evaluation methods with a focus on statistical significance tests, and demonstrates the use of two adequate evaluation methods: the Mincer-Zarnowitz efficiency test and Wilcoxon's non-parametric matched-pairs signed-ranks test.
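
As an illustration of the second method, the sketch below applies Wilcoxon's matched-pairs signed-ranks test to paired absolute forecast errors from two competing occupancy models. The figures and model labels are hypothetical and SciPy is assumed to be available; the study's own data would replace them.

```python
# Wilcoxon matched-pairs signed-ranks test on paired forecast errors (illustrative data).
import numpy as np
from scipy import stats

actual = np.array([0.82, 0.75, 0.91, 0.68, 0.77, 0.85, 0.73, 0.88])   # observed occupancy rates
model_a = np.array([0.80, 0.78, 0.88, 0.70, 0.74, 0.86, 0.70, 0.90])  # forecasts from method A
model_b = np.array([0.75, 0.70, 0.95, 0.60, 0.82, 0.80, 0.78, 0.84])  # forecasts from method B

err_a = np.abs(actual - model_a)
err_b = np.abs(actual - model_b)

# H0: the median difference between the paired absolute errors is zero.
stat, p_value = stats.wilcoxon(err_a, err_b)
print(f"Wilcoxon W = {stat:.2f}, p = {p_value:.3f}")
```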

Relevance: 100.00%

Abstract:

In the finance literature, many economic theories and models have been proposed to explain and estimate the relationship between risk and return. Assuming risk aversion and rational behavior on the part of the investor, models are developed that are supposed to help in forming efficient portfolios, which either maximize the expected rate of return for a given level of risk or minimize risk for a given rate of return. One of the most widely used models for forming these efficient portfolios is Sharpe's Capital Asset Pricing Model (CAPM). In the development of this model it is assumed that investors have homogeneous expectations about the future probability distribution of the rates of return; that is, every investor assumes the same values for the parameters of the probability distribution. Likewise, homogeneity of financial volatility is commonly assumed, where volatility is taken as investment risk and is usually measured by the variance of the rates of return. Typically the square root of the variance is used to define financial volatility, and it is also often assumed that the data-generating process consists of independent and identically distributed random variables. This again implies that financial volatility is measured from homogeneous time series with stationary parameters. In this dissertation, we investigate the assumption of homogeneity of market agents and provide evidence of heterogeneity in market participants' information, objectives, and expectations about the parameters of the probability distribution of prices, as given by differences in the empirical distributions corresponding to different time scales, which in this study are associated with different classes of investors. We also demonstrate that the statistical properties of the underlying data-generating processes, including the volatility of the rates of return, are quite heterogeneous. In other words, we provide empirical evidence against the traditional views about homogeneity using non-parametric wavelet analysis on trading data. The results show heterogeneity of financial volatility at different time scales, and time scale is one of the most important aspects in which trading behavior differs. In fact, we conclude that heterogeneity, as posited by the Heterogeneous Markets Hypothesis, is the norm and not the exception.
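
The following sketch illustrates the kind of scale-by-scale, non-parametric volatility estimate the dissertation relies on: a discrete wavelet decomposition of a return series, with the variance of the detail coefficients computed at each level. PyWavelets (pywt) is assumed to be available and the returns are simulated, so this is an illustration of the technique rather than the study's analysis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=4096)          # placeholder intraday returns

# Decompose into detail coefficients d1..d6; each level roughly doubles the
# time scale (d1 ~ shortest-horizon traders, d6 ~ longest-horizon traders).
coeffs = pywt.wavedec(returns, wavelet="db4", level=6)
details = coeffs[:0:-1]                              # d1, d2, ..., d6

for j, d in enumerate(details, start=1):
    # Wavelet variance at level j: average energy of the detail coefficients.
    wav_var = np.mean(d ** 2)
    print(f"level {j:>2}  (~{2**j} obs horizon)  wavelet variance = {wav_var:.3e}")
```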

Relevance: 100.00%

Abstract:

This paper describes the implementation of a novel mitigation approach and subsequent adaptive management designed to reduce the transfer of fine sediment in Glaisdale Beck, a small upland catchment in the UK. Hydro-meteorological and suspended sediment datasets were collected over a two-year period spanning the pre- and post-diversion periods in order to assess the impact of the channel reconfiguration scheme on the fluvial suspended sediment dynamics. Analysis of the river response demonstrates that the fluvial sediment system has become more restrictive, with reduced fine sediment transfer. This is characterised by a reduction in flow-weighted mean suspended sediment concentration from 77.93 mg/l prior to mitigation to 74.36 mg/l following the diversion. A Mann-Whitney U test found statistically significant differences (p < 0.001) between the pre- and post-monitoring median SSCs, while application of one-way analysis of covariance (ANCOVA) to the coefficients of sediment rating curves developed before and after the diversion also found statistically significant differences (p < 0.001), with both the log a and b coefficients becoming smaller following the diversion. Non-parametric analysis indicates a reduction in residuals through time (p < 0.001), with the developed LOWESS model over-predicting sediment concentrations as the channel stabilises. However, the channel is continuing to adjust to the reconfigured morphology, with evidence of a headward-propagating knickpoint which has migrated 120 m at an exponentially decreasing rate over the 7 years since diversion. The study demonstrates that channel reconfiguration can be effective in mitigating fine sediment flux in upland streams, but the full value of this may take many years to achieve whilst the fluvial system slowly readjusts.
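
A minimal sketch of the Mann-Whitney U comparison between pre- and post-diversion suspended sediment concentrations is shown below; the lognormal samples stand in for the monitored SSC records and SciPy is assumed to be available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ssc_pre = rng.lognormal(mean=np.log(75), sigma=0.6, size=500)    # mg/l, pre-diversion
ssc_post = rng.lognormal(mean=np.log(70), sigma=0.6, size=500)   # mg/l, post-diversion

# Two-sided test of whether the samples share the same location; no normality
# assumption is needed, which suits skewed SSC data.
u_stat, p_value = stats.mannwhitneyu(ssc_pre, ssc_post, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4g}")
print(f"median pre = {np.median(ssc_pre):.1f} mg/l, median post = {np.median(ssc_post):.1f} mg/l")
```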

Relevance: 100.00%

Abstract:

Strategy research has been widespread for many years and, more recently, the process of strategy formation from an individual perspective has also gained attention in academia. In line with this trend, the goal of this study is to discuss the process of strategy formation from an individual perspective based on the three dimensions of the strategic process (change, thinking and formation) proposed by De Wit and Meyer (2004). To this end, this exploratory-descriptive study used factor analysis, non-parametric correlation and linear regression techniques to analyze data collected from the decision makers of 93 retailers in the construction supplies industry in Natal and its metropolitan area. As a result, the formation factors of the dimensions investigated were identified for the majority, confirming the existence of paradoxes in the strategic process, and a relationship was found between logical thinking and deliberate formation and the hierarchical level of decision makers.
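
As a sketch of the non-parametric correlation step, the example below computes a Spearman rank correlation between a decision maker's hierarchical level and a hypothetical "deliberate formation" factor score; the values are illustrative, not the survey data.

```python
import numpy as np
from scipy import stats

# 1 = operational, 2 = middle management, 3 = owner/director (ordinal scale)
hier_level = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 1, 2, 3])
deliberate_score = np.array([2.1, 2.8, 3.0, 3.4, 2.9, 4.1, 3.8, 4.5, 4.0, 2.5, 3.2, 4.3])

rho, p_value = stats.spearmanr(hier_level, deliberate_score)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```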

Relevance: 100.00%

Abstract:

Academic demands, a new social context, new routines and decreased parental control are factors that may influence the sleep pattern of freshman students at university. Medical students at the Federal University of Rio Grande do Norte (UFRN) attend a full-time course with demanding subjects and, in the first semester, classes begin at 7 a.m. This group is composed of young adults who still experience the delayed sleep phase common in adolescence, indicating that this class schedule may be inappropriate for this age. The reduction of nocturnal sleep on school days and the attempt to recover sleep on free days – social jet lag (JLS) – suggest that in the first semester students suffer from high sleep pressure. High sleep pressure may affect cognitive tasks and performance. Therefore, the aim of this study was to investigate the relationship between sleep pressure and the academic profile of first-semester medical students at UFRN, characterizing this population sociodemographically and investigating possible impacts on the rest-activity rhythm and academic performance. A sample of 88 healthy men and women answered the following questionnaires: the Pittsburgh Sleep Quality Index (PSQI), the Epworth Sleepiness Scale (ESS), the Horne & Ostberg chronotype questionnaire (HO), the Munich Chronotype Questionnaire (MCTQ) and an adapted "Health and Sleep" questionnaire. Actigraphy was used for 14 days to produce actograms and obtain non-parametric variables of the rest-activity rhythm, and grades from the morning courses were used as a measure of academic performance. JLS was used as the measure of sleep pressure. The statistical significance level was 95%. The population was sociodemographically homogeneous. Most students have a healthy lifestyle, practice physical activity, use a car to go to the university and take between 15 and 30 minutes for this route. Regarding the sleep-wake cycle, most were classified as intermediate (38.6%) or evening (32%) chronotypes, need to nap during the week, suffer from daytime sleepiness and have poor sleep quality. 83% of the sample has at least 1 h of JLS, which led us to divide it into two groups: < 2 h JLS (N = 44) and ≥ 2 h JLS (N = 44). The groups differed only in chronotype, showing that evening-type individuals have more JLS; however, no differences were found in relation to sociodemographic aspects, the rest-activity rhythm or academic performance. The homogeneity of the sample limited the comparison between groups; nevertheless, it is alarming that students already present, in the first semester, JLS, poor sleep quality and excessive daytime sleepiness, which can be accentuated through the university years with the emergence of night shifts and increased academic demands. Interventions addressing the importance of good sleep habits and a change in class start times are strategies aimed at improving students' health.
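
The sketch below shows one common way to compute the social jet lag measure (JLS) used to split the sample: the absolute difference between mid-sleep on free days and mid-sleep on workdays, derived from MCTQ-style bed and wake times. The times are illustrative assumptions, not participant data.

```python
def mid_sleep(onset_h: float, offset_h: float) -> float:
    """Midpoint of the sleep episode, handling sleep that crosses midnight."""
    duration = (offset_h - onset_h) % 24
    return (onset_h + duration / 2) % 24

def social_jet_lag(onset_work, offset_work, onset_free, offset_free) -> float:
    msw = mid_sleep(onset_work, offset_work)   # mid-sleep, workdays
    msf = mid_sleep(onset_free, offset_free)   # mid-sleep, free days
    diff = abs(msf - msw)
    return min(diff, 24 - diff)                # shortest distance around the clock

# Sleeps 00:30-06:30 on class days and 02:00-10:00 on free days -> 2.5 h of JLS,
# which would place this student in the >= 2 h group.
print(f"JLS = {social_jet_lag(0.5, 6.5, 2.0, 10.0):.2f} h")
```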

Relevance: 100.00%

Abstract:

The objective of this research was to investigate the monthly climatological, seasonal, annual and interdecadal variability of reference evapotranspiration (ETo) in the state of Acre, in order to better understand its spatial and temporal variability and identify possible trends in the region. The study was conducted with data from the municipalities of Rio Branco (the state capital), Tarauacá and Cruzeiro do Sul over a 30-year period (1985-2014), using monthly data from surface weather stations of the National Institute of Meteorology. First, the consistency of the meteorological data was checked, and gaps in the time series were filled by means of multivariate techniques. Subsequently, statistical tests for trend (Mann-Kendall) and homogeneity were performed, the magnitude of the trend was estimated with Sen's estimator, and parametric and non-parametric two-sample tests were applied to identify the year from which the trend became significant. Finally, analysis of variance (ANOVA) was used to verify whether there were significant differences in average annual evapotranspiration between locations. The indirect Penman-Monteith method, as parameterized by FAO, was used to calculate ETo. Descriptive statistics showed that the annual average ETo was 3.80, 2.92 and 2.86 mm day-1 for Rio Branco, Tarauacá and Cruzeiro do Sul, respectively. The series feature a marked seasonal pattern, with a minimum in June and a maximum in October; Rio Branco showed the strongest signal (largest amplitudes), while Cruzeiro do Sul presented the highest variability among the studied locations. ANOVA indicated that the annual averages differ statistically among locations at the 1% significance level, except between Cruzeiro do Sul and Tarauacá, where no statistically significant difference was found. For the three locations, the 2000s was the decade with the highest ETo values, associated with warmer waters of the North Atlantic basin, while the 1980s had the lowest values, associated with cooler waters of this basin. The Mann-Kendall test and Sen's estimator revealed an increasing trend in seasonal reference evapotranspiration (autumn, winter and spring) on the order of 0.11 mm per decade, which became statistically significant from 1990, 1996 and 2001 for Cruzeiro do Sul, Tarauacá and Rio Branco, respectively. The trend analysis of the meteorological parameters showed a positive trend, significant at the 5% level, for average temperature, minimum temperature and solar radiation.
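
A minimal sketch of the Mann-Kendall test and Sen's slope estimator applied to an annual ETo series is given below. It follows the standard textbook formulas (normal approximation, no tie correction) and runs on a simulated series rather than the Acre station data.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))       # two-sided p-value
    return s, z, p

def sens_slope(x):
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return np.median(slopes)                    # median pairwise slope (units per year)

rng = np.random.default_rng(2)
years = np.arange(1985, 2015)
eto = 3.2 + 0.011 * (years - 1985) + rng.normal(0, 0.08, size=years.size)  # mm/day

s, z, p = mann_kendall(eto)
print(f"Mann-Kendall S = {s}, Z = {z:.2f}, p = {p:.4f}")
print(f"Sen's slope = {sens_slope(eto) * 10:.3f} mm/day per decade")
```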

Relevance: 100.00%

Abstract:

An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes) and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers deal with this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has a different cost. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures named MOTA/D, NSTA and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results obtained with the algorithms, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
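
The sketch below illustrates the kind of non-parametric comparison used to judge whether the algorithms differ on a Pareto-compliant quality indicator (hypervolume is assumed here) over the thirty test cases: a Friedman test across the three algorithms, followed by a paired Wilcoxon post-hoc comparison. The indicator values are simulated, not results from MOTA/D, NSTA or SPETA.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_cases = 30
hv_mota_d = rng.normal(0.82, 0.03, n_cases)   # hypervolume per test case
hv_nsta   = rng.normal(0.79, 0.03, n_cases)
hv_speta  = rng.normal(0.78, 0.03, n_cases)

# Friedman test: the same 30 instances are solved by every algorithm, so each
# instance acts as a block and only within-instance ranks are compared.
chi2, p_value = stats.friedmanchisquare(hv_mota_d, hv_nsta, hv_speta)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_value:.4f}")

# A paired post-hoc comparison between two algorithms (Wilcoxon signed-rank).
w, p_pair = stats.wilcoxon(hv_mota_d, hv_nsta)
print(f"MOTA/D vs NSTA: W = {w:.1f}, p = {p_pair:.4f}")
```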

Relevance: 100.00%

Abstract:

Climate variability and change have generated great concern worldwide; global warming is one of the major issues and may be affecting the availability of water resources in irrigated perimeters. In the semiarid region of Northeastern Brazil it is known that drought predominates, but not enough is known about trends in climate series of the joint water loss by evaporation and transpiration (evapotranspiration). Therefore, the main objective of this study was to analyze whether there is evidence of increase and/or decrease in the regime of reference evapotranspiration (ETo) at monthly, annual and interdecadal scales in the irrigated-agriculture towns of Juazeiro, BA (9°24'S, 40°26'W, 375.5 m) and Petrolina, PE (9°09'S, 40°22'W, 376 m). The daily meteorological data were provided by EMBRAPA Semiárido for the period from 01/01/1976 to 12/31/2014, and daily ETo was estimated using the standard Penman-Monteith method (EToPM) as parameterized by Smith (1991). Other, more simplified estimation methods were calculated and compared with EToPM: Solar Radiation (EToRS), Linacre (EToL), Hargreaves and Samani (EToHS) and the Class A pan method (EToTCA). The main statistical analyses were non-parametric tests of homogeneity (run test), trend (Mann-Kendall), trend magnitude (Sen) and start-of-trend detection (Mann-Whitney). The statistical significance adopted was 5 and/or 1%. Analysis of variance (ANOVA) was used to detect whether there were significant differences in the interdecadal means. For the comparison between the ETo methods, the correlation coefficient (r), Student's t test and Tukey's test at the 5% significance level were used. Finally, the Willmott et al. (1985) statistics were used to evaluate the index of agreement and the performance of the simplified methods relative to the standard method. The main results show a decrease in the EToPM time series in the irrigated areas of Juazeiro, BA and Petrolina, PE, significant at 1% and 5% respectively, with an annual magnitude of -14.5 mm (Juazeiro) and -7.7 mm (Petrolina) and a trend onset in 1996. The method with the best agreement with EToPM was EToRS, with very good performance in both locations, followed by EToL with good performance (Juazeiro) and medium performance (Petrolina). EToHS had the worst performance (rated bad) in both locations. It is suggested that this decrease in EToPM may be associated with the increase in irrigated agricultural areas and the construction of the Sobradinho lake upstream of the perimeters.
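
As a sketch of the agreement analysis, the example below implements the Willmott (1985) index of agreement and compares a simplified ETo estimate against a Penman-Monteith reference; the daily values are illustrative.

```python
import numpy as np

def willmott_d(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Index of agreement d = 1 - SSE / potential error (1 = perfect agreement)."""
    o_mean = observed.mean()
    sse = np.sum((predicted - observed) ** 2)
    potential = np.sum((np.abs(predicted - o_mean) + np.abs(observed - o_mean)) ** 2)
    return 1.0 - sse / potential

eto_pm = np.array([5.1, 4.8, 5.5, 6.0, 5.7, 4.9, 5.2])   # reference (EToPM), mm/day
eto_hs = np.array([5.6, 5.2, 5.9, 6.5, 6.1, 5.4, 5.8])   # simplified estimate, mm/day

print(f"Willmott d = {willmott_d(eto_hs, eto_pm):.3f}")
print(f"correlation r = {np.corrcoef(eto_hs, eto_pm)[0, 1]:.3f}")
```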

Relevance: 100.00%

Abstract:

This thesis stems from a project with the real-time environmental monitoring company EMSAT Corporation, which was looking for methods to automatically flag spikes and other anomalies in its environmental sensor data streams. The problem presents several challenges: near real-time anomaly detection, absence of labeled data, and time-changing data streams. Here, we address this problem using both a statistical parametric approach and a non-parametric approach, Kernel Density Estimation (KDE). The main contribution of this thesis is extending KDE to work more effectively for evolving data streams, particularly in the presence of concept drift. To address this, we have developed a framework for integrating the Adaptive Windowing (ADWIN) change detection algorithm with KDE. We have tested this approach on several real-world data sets and received positive feedback from our industry collaborator. Some results appearing in this thesis have been presented at the ECML PKDD 2015 Doctoral Consortium.
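
The sketch below gives a simplified flavour of KDE-based anomaly flagging on a sensor stream: a density model is fit on a trailing window of recent readings and points with very low estimated density are flagged. For brevity, the ADWIN change detector that adapts the window under concept drift is replaced by a fixed-length sliding window, so this is not the thesis's full framework; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(4)
stream = rng.normal(20.0, 0.5, size=500)          # e.g. water temperature readings
stream[250] = 35.0                                 # injected spike

window, threshold = 100, -10.0                     # log-density cutoff (tuning choice)
flags = []
for t in range(window, len(stream)):
    ref = stream[t - window:t].reshape(-1, 1)      # trailing window of recent data
    kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(ref)
    log_dens = kde.score_samples(np.array([[stream[t]]]))[0]
    if log_dens < threshold:
        flags.append((t, stream[t], log_dens))

for t, value, ld in flags:
    print(f"t={t}: value={value:.1f} flagged (log-density {ld:.1f})")
```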

Relevance: 100.00%

Abstract:

This paper proposes an allocation Malmquist index inspired by the work on the non-parametric cost Malmquist index. We first show how to decompose the cost Malmquist index into the input-oriented Malmquist index and the allocation Malmquist index. An application in corporate management of the China securities industry, with a panel data set of 40 securities companies during the period 2005–2011, shows the practicality of the proposed model.
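
The sketch below shows the non-parametric (DEA) building block underlying Malmquist-type indices: the input-oriented, constant-returns efficiency score of one decision-making unit relative to a frontier built from all units, solved as a linear program. A Malmquist index is a ratio of such distance functions computed against frontiers from different periods; the allocation decomposition proposed in the paper is not reproduced here, and the toy data are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def input_efficiency(X, Y, o):
    """X: (m inputs x n DMUs), Y: (s outputs x n DMUs), o: index of the evaluated DMU."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                           # minimize theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro   (outputs at least y_ro)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(0, None)] * (n + 1)                        # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy panel slice: 4 securities companies, 2 inputs (staff, capital), 1 output (revenue).
X = np.array([[20.0, 30.0, 40.0, 35.0],
              [50.0, 60.0, 90.0, 70.0]])
Y = np.array([[100.0, 120.0, 150.0, 110.0]])

for o in range(X.shape[1]):
    print(f"DMU {o}: input-oriented efficiency = {input_efficiency(X, Y, o):.3f}")
```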

Relevance: 100.00%

Abstract:

Objective. The prevalence of smoking among Aboriginal Canadians is higher than among non-Aboriginal Canadians, a behavior that also tends to alter dietary patterns. Compared with the general Canadian population, maternal smoking rates are almost twice as high. The aim of this study was to compare the dietary adequacy of Inuvialuit women of childbearing age, comparing smokers versus non-smokers. Research methods & procedures. A cross-sectional study, in which participants completed a culturally specific quantitative food frequency questionnaire. Non-parametric analysis was used to compare mean nutrient intake, dietary inadequacy and differences in nutrient density between smokers and non-smokers. Multiple logistic regression analyses were performed for key nutrient inadequacies and smoking status. Data were collected from three communities in the Beaufort Delta region of the Northwest Territories, Canada, from randomly selected Inuvialuit women of childbearing age (19-44 years). Results: Of 92 participants, 75% reported being smokers. There were no significant differences in age, BMI, marital status, education, number of people in the household working and/or self-employed, and physical activity between smokers and non-smokers. Non-parametric analysis showed no differences in nutrient intake between smokers and non-smokers. Logistic regression, however, revealed a positive association between smoking and inadequacies of vitamin C (OR = 2.91, 95% CI, 1.17-5.25), iron (OR = 3.16, 95% CI, 1.27-5.90), and zinc (OR = 2.78, 95% CI, 1.12-4.94). A high percentage of women (>60%), regardless of smoking status, did not meet the dietary recommendations for fiber, vitamins D and E, and potassium. Conclusions: This study provides evidence of inadequate dietary intake among Inuvialuit women of childbearing age regardless of smoking behavior.
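
A minimal sketch of the logistic-regression odds-ratio calculation relating smoking status to a nutrient inadequacy indicator is shown below; the data are simulated and the study's models may include additional covariates. statsmodels is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 92
smoker = rng.binomial(1, 0.75, n)                       # ~75% smokers, as in the sample
# Simulated inadequacy indicator with higher probability among smokers.
p_inadequate = np.where(smoker == 1, 0.55, 0.30)
inadequate = rng.binomial(1, p_inadequate)

X = sm.add_constant(smoker.astype(float))               # intercept + smoking indicator
res = sm.Logit(inadequate, X).fit(disp=False)

odds_ratio = np.exp(res.params[1])
ci_low, ci_high = np.exp(res.conf_int()[1])
print(f"OR (smoker vs non-smoker) = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```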

Relevance: 100.00%

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
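
The sketch below builds a joint probability tensor for multivariate categorical data from a latent-class (PARAFAC) representation, the structure whose nonnegative rank the chapter relates to log-linear model support; the parameter values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(6)
H = 3                              # number of latent classes
levels = [2, 3, 4]                 # categories for each of the p = 3 variables

nu = rng.dirichlet(np.ones(H))     # class weights
# psi[j][h, c] = P(x_j = c | latent class h)
psi = [rng.dirichlet(np.ones(d), size=H) for d in levels]

# Full probability tensor: pi[c1, c2, c3] = sum_h nu_h * prod_j psi_j[h, c_j]
pi = np.zeros(levels)
for h in range(H):
    outer = np.einsum("a,b,c->abc", psi[0][h], psi[1][h], psi[2][h])
    pi += nu[h] * outer

print(f"tensor shape {pi.shape}, total probability = {pi.sum():.6f}")  # sums to 1
print(f"nonnegative rank is at most H = {H}")
```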

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
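
For concreteness, the sketch below implements the truncated-Normal (Albert-Chib-type) data augmentation Gibbs sampler for a probit model with a flat prior, simulated in a rare-event regime with few successes; it illustrates the class of sampler analyzed in Chapter 7 rather than reproducing the chapter's experiments.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(7)
n, p = 2000, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                      # intercept chosen so successes are rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
print(f"observed successes: {int(y.sum())} of {n}")

XtX_inv = np.linalg.inv(X.T @ X)
beta = np.zeros(p)
draws = []
for it in range(2000):
    # 1) z_i | beta, y_i ~ Normal(x_i' beta, 1) truncated to (0, inf) if y_i = 1, else (-inf, 0)
    mu = X @ beta
    lower = np.where(y == 1, -mu, -np.inf)             # bounds standardized around mu
    upper = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lower, upper, size=n, random_state=rng)
    # 2) beta | z ~ Normal((X'X)^{-1} X'z, (X'X)^{-1})  (flat prior on beta)
    beta_hat = XtX_inv @ (X.T @ z)
    beta = rng.multivariate_normal(beta_hat, XtX_inv)
    draws.append(beta.copy())

draws = np.array(draws[500:])                          # discard burn-in
print("posterior means:", draws.mean(axis=0).round(3))
```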

Relevance: 100.00%

Abstract:

This study explores the potential impact of a forest protection intervention on rural households' private fuel tree planting in the Chiro district of eastern Ethiopia. The results revealed a robust and significant positive impact of the intervention on farmers' decisions to produce private household energy by growing fuel trees on their farms. As participation in private fuel tree planting is not random, the study addresses this methodological issue by investigating the causal effect of the forest protection intervention on rural farm households' private fuel tree planting through the non-parametric propensity score matching (PSM) method. On average, the protection intervention increased fuel tree planting by 503 (580.6%) compared to open-access areas and indirectly contributed to slowing down the loss of biodiversity in the area. Land cover/use is a dynamic phenomenon that changes with time and space due to anthropogenic pressure and development. Forest cover and land use changes in Chiro District, Ethiopia over a period of 40 years were studied using remotely sensed data. Multi-temporal Landsat satellite data were used to map and monitor forest cover and land use changes at three points in time: 1972, 1986 and 2012. A pixel-based supervised image classification was used to map land use/land cover classes for each time point. The change detection analysis revealed that the area has undergone remarkable land cover/land use changes in general and forest cover change in particular. Specifically, dense forest cover declined from 235 ha in 1972 to 51 ha in 1986. However, government forest protection interventions in 1989 slowed the drastic loss of dense forest cover around the protected area by reclaiming 1,300 hectares of deforested land through a reforestation program up to 2012.
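
A minimal sketch of the propensity score matching step is given below: a logistic propensity model followed by one-to-one nearest-neighbour matching on the estimated score, from which the average treatment effect on the treated (ATT) is computed. The household data are simulated, not the Chiro district survey, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(8)
n = 400
covariates = rng.normal(size=(n, 3))                    # e.g. land size, age, distance to forest
treated = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * covariates[:, 0] - 0.5))))
# Outcome: number of fuel trees planted, with a built-in positive treatment effect.
outcome = 80 + 40 * covariates[:, 0] + 500 * treated + rng.normal(0, 30, n)

# 1) Estimate propensity scores P(treated | covariates).
ps = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

# 2) Match each treated household to the nearest control on the propensity score.
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(ps[t_idx].reshape(-1, 1))
matched_controls = c_idx[match.ravel()]

# 3) ATT: mean outcome difference between treated units and their matches.
att = np.mean(outcome[t_idx] - outcome[matched_controls])
print(f"estimated ATT = {att:.1f} additional fuel trees per household")
```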

Relevance: 100.00%

Abstract:

We present an extensive photometric catalog for 548 CALIFA galaxies observed as of the summer of 2015. CALIFA is currently lacking photometry matching the scale and diversity of its spectroscopy; this work is intended to meet all photometric needs for CALIFA galaxies while also identifying best photometric practices for upcoming integral field spectroscopy surveys such as SAMI and MaNGA. This catalog comprises gri surface brightness profiles derived from Sloan Digital Sky Survey (SDSS) imaging, a variety of non-parametric quantities extracted from these profiles, and parametric models fitted to the i-band profiles (1D) and original galaxy images (2D). To complement our photometric analysis, we contrast the relative performance of our 1D and 2D modelling approaches. The ability of each measurement to characterize the global properties of galaxies is quantitatively assessed in the context of constructing the tightest scaling relations. Where possible, we compare our photometry with existing photometrically or spectroscopically obtained measurements from the literature. Close agreement is found with Walcher et al. (2014), the current source of basic photometry and classifications of CALIFA galaxies, while comparisons with spectroscopically derived quantities reveal the effect of CALIFA's limited field of view compared to broadband imaging surveys such as the SDSS. The colour-magnitude diagram, star formation main sequence, and Tully-Fisher relation of CALIFA galaxies are studied, to give a small example of the investigations possible with this rich catalog. We conclude with a discussion of points of concern for ongoing integral field spectroscopy surveys and directions for future expansion and exploitation of this work.
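
As an example of the non-parametric quantities extracted from surface brightness profiles, the sketch below computes a half-light radius and a concentration index from a curve of growth; the exponential profile is a toy stand-in for a measured SDSS profile, and C = 5 log10(R80/R20) is one common definition rather than necessarily the one adopted in the catalog.

```python
import numpy as np

r = np.linspace(0.1, 50.0, 500)                        # radius in arcsec
surface_brightness = np.exp(-r / 5.0)                  # toy exponential disc profile

# Curve of growth: flux enclosed within each radius (2*pi*r*I(r) integrated).
enclosed = np.cumsum(2 * np.pi * r * surface_brightness * np.gradient(r))
frac = enclosed / enclosed[-1]

r20, r50, r80 = np.interp([0.2, 0.5, 0.8], frac, r)
concentration = 5 * np.log10(r80 / r20)
print(f"R50 = {r50:.2f} arcsec, C = {concentration:.2f}")
```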

Relevance: 100.00%

Abstract:

Identifying 20th-century periodic coastal surge variation is strategic for 21st-century coastal surge estimates, as surge periodicities may amplify or reduce future MSL-enhanced surge forecasts. Extreme coastal surge data from Belfast Harbour (UK) tide gauges are available for 1901–2010 and provide the potential for decadal-plus periodic coastal surge analysis. Annual extreme surge-elevation distributions (sampled every 10 min) are analysed using PCA and cluster analysis to decompose variation within and between years, to assess the similarity of years in terms of Surge Climate Types, and to establish the significance of any transitions in Type occurrence over time using non-parametric Markov analysis. Annual extreme surge variation is shown to be periodically organised across the 20th century. Extreme surge magnitude and distribution show a number of significant cyclone-induced multi-annual (2, 3, 5 & 6 years) cycles, as well as dominant multi-decadal (15–25 years) cycles of variation superimposed on an 80-year fluctuation in atmospheric–oceanic variation across the North Atlantic (relative to NAO/AMO interaction). The top 30 extreme surge events show some relationship with the NAO per se, given that 80% are associated with westerly-dominant atmospheric flows (+NAO), while 20% of the events are associated with blocking air masses (−NAO). Although 20% of the top 30 ranked positive surges occurred within the last twenty years, there is no unequivocal evidence of recent acceleration in extreme surge magnitude beyond the scale of natural periodic variation.
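
The sketch below illustrates the decomposition-and-clustering step in miniature: yearly summaries of the extreme-surge distribution are reduced with PCA, grouped into a small number of "Surge Climate Types" with k-means, and first-order transition counts between types are tabulated as the input to a Markov analysis. The yearly feature vectors are simulated, not the Belfast Harbour record, and k-means is an assumed stand-in for the study's clustering method.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(9)
n_years = 110                                          # 1901-2010
# Per-year percentiles (e.g. 90th to 99.9th) of surge elevation in metres.
features = rng.normal(0.4, 0.1, size=(n_years, 8)) + np.linspace(0, 0.3, 8)

scores = PCA(n_components=2).fit_transform(features)   # within/between-year variation
types = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)

# First-order transition counts between types: the input to a Markov analysis
# of whether type occurrence is organised (non-random) over time.
transitions = np.zeros((4, 4), dtype=int)
for a, b in zip(types[:-1], types[1:]):
    transitions[a, b] += 1
print(transitions)
```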