810 results for Extreme values
Abstract:
In this paper we discuss a fast Bayesian extension to kriging algorithms which has been used successfully for fast, automatic mapping in emergency conditions in the Spatial Interpolation Comparison 2004 (SIC2004) exercise. The application of kriging to automatic mapping raises several issues such as robustness, scalability, speed and parameter estimation. Various ad hoc solutions have been proposed and used extensively but they lack a sound theoretical basis. In this paper we show how observations can be projected onto a representative subset of the data, without losing significant information. This allows the complexity of the algorithm to grow as O(nm²), where n is the total number of observations and m is the size of the subset of the observations retained for prediction. The main contribution of this paper is to further extend this projective method through the application of space-limited covariance functions, which can be used as an alternative to the commonly used covariance models. In many real-world applications the correlation between observations essentially vanishes beyond a certain separation distance. Thus it makes sense to use a covariance model that encompasses this belief, since this leads to sparse covariance matrices for which optimised sparse matrix techniques can be used. In the presence of extreme values we show that space-limited covariance functions offer an additional benefit: they maintain smoothness locally but at the same time lead to a more robust, and compact, global model. We show the performance of this technique coupled with the sparse extension to the kriging algorithm on synthetic data and outline a number of computational benefits such an approach brings. To test the relevance to automatic mapping we apply the method to the data used in a recent comparison of interpolation techniques (SIC2004) to map the levels of background ambient gamma radiation. © Springer-Verlag 2007.
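As a hedged illustration of the space-limited covariance idea (not the paper's implementation, and with hypothetical parameter values), a compactly supported model such as the spherical covariance is exactly zero beyond its range, so only point pairs closer than the range ever need to be stored, which yields a sparse covariance matrix:

```python
# Sketch: a compactly supported (space-limited) covariance model produces a
# sparse covariance matrix; parameters and coordinates are placeholders.
import numpy as np
from scipy import sparse
from scipy.spatial import cKDTree

def spherical_cov(h, sill=1.0, cutoff=1.0):
    """Spherical covariance: exactly zero for separations h >= cutoff."""
    h = np.asarray(h, dtype=float)
    c = sill * (1.0 - 1.5 * h / cutoff + 0.5 * (h / cutoff) ** 3)
    return np.where(h < cutoff, c, 0.0)

def sparse_cov_matrix(coords, sill=1.0, cutoff=1.0):
    """Store only the non-zero entries by finding point pairs within the cutoff."""
    tree = cKDTree(coords)
    pairs = tree.query_pairs(r=cutoff, output_type='ndarray')
    h = np.linalg.norm(coords[pairs[:, 0]] - coords[pairs[:, 1]], axis=1)
    vals = spherical_cov(h, sill, cutoff)
    n = len(coords)
    C = sparse.coo_matrix((vals, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    return (C + C.T + sill * sparse.eye(n)).tocsr()   # symmetrise, add diagonal

coords = np.random.default_rng(0).random((2000, 2))
C = sparse_cov_matrix(coords, sill=1.0, cutoff=0.05)
print(f"non-zero entries stored: {C.nnz} of {C.shape[0] ** 2}")
```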
Abstract:
INTAMAP is a web processing service for the automatic interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an open source solution. The system couples the 52°North web processing service, accepting data in the form of an Observations and Measurements (O&M) document, with a computing back-end realized in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a new markup language to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropies and extreme values. In the light of the INTAMAP experience, we discuss the lessons learnt.
Abstract:
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
Abstract:
2000 Mathematics Subject Classification: 60G70, 60F05.
Abstract:
Bayesian adaptive methods have been extensively used in psychophysics to estimate the point at which performance on a task attains arbitrary percentage levels, although the statistical properties of these estimators have never been assessed. We used simulation techniques to determine the small-sample properties of Bayesian estimators of arbitrary performance points, specifically addressing the issues of bias and precision as a function of the target percentage level. The study covered three major types of psychophysical task (yes-no detection, 2AFC discrimination and 2AFC detection) and explored the entire range of target performance levels allowed for by each task. Other factors included in the study were the form and parameters of the actual psychometric function Psi, the form and parameters of the model function M assumed in the Bayesian method, and the location of Psi within the parameter space. Our results indicate that Bayesian adaptive methods render unbiased estimators of any arbitrary point on Psi only when M = Psi, and otherwise they yield bias whose magnitude can be considerable as the target level moves away from the midpoint of the range of Psi. The standard error of the estimator also increases as the target level approaches extreme values, whether or not M = Psi. Contrary to widespread belief, neither the performance level at which bias is null nor that at which standard error is minimal can be predicted by the sweat factor. A closed-form expression nevertheless gives a reasonable fit to data describing the dependence of standard error on number of trials and target level, which allows determination of the number of trials that must be administered to obtain estimates with prescribed precision.
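For orientation, the sketch below (assumed logistic forms for both the true psychometric function Psi and the model M, with made-up parameter values; not the paper's simulation code) illustrates Bayesian adaptive estimation of the stimulus level at which performance reaches a target proportion in a 2AFC task:

```python
# Grid-based Bayesian adaptive threshold estimation with a simulated observer.
import numpy as np

def psychometric(x, threshold, slope, gamma=0.5, lam=0.02):
    """Logistic psychometric function with guess rate gamma and lapse rate lam."""
    return gamma + (1.0 - gamma - lam) / (1.0 + np.exp(-slope * (x - threshold)))

rng = np.random.default_rng(0)
thetas = np.linspace(-2.0, 2.0, 201)                  # grid of candidate thresholds
posterior = np.full_like(thetas, 1.0 / len(thetas))   # flat prior
true_theta, slope, target = 0.4, 2.0, 0.75

for trial in range(200):
    x = thetas @ posterior                                    # trial at posterior mean
    y = rng.random() < psychometric(x, true_theta, slope)     # simulated observer (Psi)
    like = psychometric(x, thetas, slope)                     # model M on the grid
    posterior *= like if y else (1.0 - like)
    posterior /= posterior.sum()

theta_hat = thetas @ posterior
# invert the model to get the stimulus level at the target performance proportion
x_target = theta_hat - np.log((1.0 - 0.5 - 0.02) / (target - 0.5) - 1.0) / slope
print(f"threshold estimate: {theta_hat:.3f}, {target:.0%} point estimate: {x_target:.3f}")
```

When the simulated observer is given a different slope or lapse rate than the model (M differs from Psi), the estimate of the target-level point becomes biased, which is the effect the abstract quantifies.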
Abstract:
More than 25 years of mass budget measurements of Hintereisferner allow a correlation with simultaneous observations of the climatological station Vent (1900 m a.s.l.). By this means it is possible to estimate the mass budget of earlier periods. Extreme values in the period between 1934/35 and 1951/52 span a range nearly twice as wide as in the period thereafter.
Abstract:
Clustering algorithms, pattern mining techniques and associated quality metrics have emerged as reliable methods for modelling learners’ performance, comprehension and interaction in given educational scenarios. The specificity of the available data, such as missing values, extreme values or outliers, creates a challenge for extracting significant user models from an educational perspective. In this paper we introduce a pattern detection mechanism within our data analytics tool, based on k-means clustering and on the SSE, silhouette, Dunn index and Xie-Beni index quality metrics. Experiments performed on a dataset obtained from our online e-learning platform show that the extracted interaction patterns were representative for classifying learners. Furthermore, the performed monitoring activities created a strong basis for generating automatic feedback to learners in terms of their course participation, while relying on their previous performance. In addition, our analysis introduces automatic triggers that highlight learners who will potentially fail the course, enabling tutors to take timely action.
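A minimal sketch of the kind of analysis described (illustrative only, with a placeholder feature matrix rather than the authors' learner-interaction data) is to cluster the features with k-means and report the named quality metrics:

```python
# k-means clustering of (placeholder) learner features plus quality metrics.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = StandardScaler().fit_transform(rng.normal(size=(300, 5)))  # hypothetical features

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = km.labels_

sse = km.inertia_                      # within-cluster sum of squared errors (SSE)
sil = silhouette_score(X, labels)      # silhouette coefficient

def dunn_index(X, labels):
    """Dunn index: minimum inter-cluster distance / maximum intra-cluster diameter."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    diam = max(np.max(np.linalg.norm(c[:, None] - c[None, :], axis=-1)) for c in clusters)
    sep = min(np.min(np.linalg.norm(a[:, None] - b[None, :], axis=-1))
              for i, a in enumerate(clusters) for b in clusters[i + 1:])
    return sep / diam

print(f"SSE={sse:.1f}  silhouette={sil:.3f}  Dunn={dunn_index(X, labels):.3f}")
```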
Abstract:
This paper presents the determination of a mean solar radiation year and of a typical meteorological year for the region of Funchal on Madeira Island, Portugal. The data set includes hourly mean and extreme values for air temperature, relative humidity and wind speed, and hourly mean values for solar global and diffuse radiation, for the period 2004-2014, with a maximum data coverage of 99.7%. The determination of the mean solar radiation year consisted, in a first step, of averaging all values for each hour/day pair and, in a second step, of applying a five-day centred moving average to the hourly values. The determination of the typical meteorological year was based on Finkelstein-Schafer statistics, which make it possible to obtain a complete year of real measurements through the selection and combination of typical months, preserving the long-term averages while still allowing the analysis of short-term events. The typical meteorological year was validated by comparing its monthly averages with the long-term monthly averages. The values obtained were very close, so the typical meteorological year can accurately represent the long-term data series. The typical meteorological year can be used in the simulation of renewable energy systems, namely solar energy systems, and for predicting the energy performance of buildings.
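As a rough sketch of the selection step (assumed data layout and synthetic values, not the paper's processing chain), the Finkelstein-Schafer statistic compares the empirical CDF of a candidate month with the long-term CDF of that calendar month, and the candidate with the smallest value is taken for the typical meteorological year:

```python
# Finkelstein-Schafer (FS) statistic for selecting a "typical" month.
import numpy as np

def fs_statistic(candidate, long_term):
    """Mean absolute difference between the candidate-month CDF and the
    long-term CDF, evaluated at the candidate month's daily values."""
    x = np.sort(np.asarray(candidate, dtype=float))
    cdf_cand = np.arange(1, len(x) + 1) / len(x)
    lt = np.sort(np.asarray(long_term, dtype=float))
    cdf_lt = np.searchsorted(lt, x, side='right') / len(lt)
    return np.mean(np.abs(cdf_cand - cdf_lt))

# Example: pick the most typical January from several years of daily means.
rng = np.random.default_rng(0)
years = {y: rng.normal(15 + 0.1 * (y - 2004), 2.0, size=31) for y in range(2004, 2015)}
long_term = np.concatenate(list(years.values()))
best = min(years, key=lambda y: fs_statistic(years[y], long_term))
print("selected year for January:", best)
```

In practice the statistic is computed for several weighted variables (temperature, humidity, wind, radiation) and the weighted sum decides the selection; the sketch shows a single variable for clarity.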
Abstract:
The increasing number of extreme rainfall events, combined with high population density and the imperviousness of the land surface, makes urban areas particularly vulnerable to pluvial flooding. In order to design and manage cities so that they can deal with this issue, the reconstruction of weather phenomena is essential. Among the most interesting data sources showing great potential are the observational networks of private sensors managed by citizens (crowdsourcing). The number of these personal weather stations is consistently increasing, and their spatial distribution roughly follows population density. Precisely for this reason, they perfectly suit this detailed study on the modelling of pluvial flooding in urban environments. The uncertainty associated with these measurements of precipitation is still a matter of research. In order to characterise the accuracy and precision of the crowdsourced data, we carried out exploratory data analyses. A comparison between Netatmo hourly precipitation amounts and observations of the same quantity from weather stations managed by national weather services is presented. The crowdsourced stations show very good skill in rain detection but tend to underestimate the reference value. In detail, the accuracy and precision of the crowdsourced data change as precipitation increases, with the spread improving towards the extreme values. Then, the ability of this kind of observation to improve the prediction of pluvial flooding is tested. To this aim, the simplified raster-based inundation model incorporated in the Saferplaces web platform is used for simulating pluvial flooding. Different precipitation fields have been produced and tested as input to the model. Two different case studies are analysed over the most densely populated Norwegian city: Oslo. The crowdsourced weather station observations, bias-corrected (i.e. increased by 25%), showed very good skill in detecting flooded areas.
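A minimal sketch of the verification and correction step (synthetic data standing in for the Netatmo and reference series; not the study's pipeline) compares crowdsourced hourly precipitation with a reference gauge, computes rain detection and multiplicative bias, and applies the simple +25% correction mentioned above:

```python
# Rain-detection skill, bias, and a fixed multiplicative bias correction.
import numpy as np

rng = np.random.default_rng(2)
reference = rng.gamma(shape=0.8, scale=2.0, size=1000)                      # reference gauge [mm/h]
crowd = np.clip(0.8 * reference + rng.normal(0.0, 0.3, size=1000), 0.0, None)  # underestimating sensor

wet = reference > 0.1                                    # rain-detection threshold
hit_rate = np.mean(crowd[wet] > 0.1)                     # fraction of wet hours detected
bias = np.mean(crowd[wet]) / np.mean(reference[wet])     # multiplicative bias

crowd_corrected = 1.25 * crowd                           # bias correction (+25%)
bias_corrected = np.mean(crowd_corrected[wet]) / np.mean(reference[wet])
print(f"hit rate {hit_rate:.2f}, raw bias {bias:.2f}, corrected bias {bias_corrected:.2f}")
```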
Abstract:
In traditional TOPSIS, the ideal solutions are assumed to be located at the endpoints of the data interval. However, not all performance attributes possess ideal values at the endpoints. We term performance attributes whose ideal values lie at the extreme points Type-1 attributes. Type-2 attributes, however, possess ideal values somewhere within the data interval rather than at the extreme endpoints. This creates a preference-ranking problem when all attributes are computed and assumed to be of the Type-1 nature. To overcome this issue, we propose a new Fuzzy DEA method for computing the ideal values and distance function of Type-2 attributes in a TOPSIS methodology. Our method allows Type-1 and Type-2 attributes to be included in an evaluation system without compromising the ranking quality. The efficacy of the proposed model is illustrated with a vendor evaluation case for a high-tech investment decision-making exercise. A comparison analysis with traditional TOPSIS is also presented. © 2012 Springer Science+Business Media B.V.
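For reference, the sketch below shows classical TOPSIS for Type-1 attributes, whose ideal values sit at the endpoints of the data interval; the paper's Fuzzy DEA treatment of Type-2 attributes is not reproduced, and the decision matrix and weights are hypothetical:

```python
# Classical TOPSIS ranking (Type-1 attributes only).
import numpy as np

X = np.array([[7., 9., 9.],      # alternatives (rows) x criteria (columns)
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 9.]])
w = np.array([0.5, 0.3, 0.2])            # criteria weights
benefit = np.array([True, True, False])  # False marks a cost criterion

R = X / np.linalg.norm(X, axis=0)        # vector-normalise each criterion
V = R * w                                # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)   # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)     # relative closeness to the ideal
print("ranking (best first):", np.argsort(-closeness) + 1)
```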
Abstract:
Quartz veins ranging in size from less than 50 cm in length and 5 cm in width to greater than 10 m in length and 5 m in width are found throughout the Central Swiss Alps. In some cases the veins are completely filled with milky quartz, while in others, sometimes spectacular void-filling quartz crystals are found. The style of vein filling and the vein size are controlled by host rock composition and deformation history. Temperatures of vein formation, estimated using stable isotope thermometry and mineral equilibria, cover a range from 450 °C down to 150 °C. Vein formation started at 18 to 20 Ma and continued for over 10 My. The oxygen isotope values of quartz veins range from 10 to 20‰, and in almost all cases are equal to those of the hosting lithology. The strongly rock-buffered veins imply a low fluid/rock ratio and minimal fluid flow. In order to explain massive, nearly monomineralic quartz formation without exceptionally large fluid fluxes, a mechanism of differential pressure and silica diffusion, combined with pressure solution, is proposed for early vein formation. Fluid inclusions and hydrous minerals in late-formed veins have extremely low δD values, consistent with meteoric water infiltration. The change from a rock-buffered, static fluid to infiltration from above can be explained in terms of changes in the large-scale deformation style occurring between 20 and 15 Ma. The rapid cooling of the Central Alps identified in previous studies may be explained, in part, by infiltration of cold meteoric waters along fracture systems down to depths of 10 km or more. An average water flux of 0.15 cm³ cm⁻² yr⁻¹ entering the rock and re-emerging heated by 40 °C is sufficient to cool rock at 10 km depth by 100 °C in 5 million years. The very negative δD values of below −130‰ for the late-stage fluids are well below the annual average values measured in meteoric water in the region today. The low fossil δD values indicate that the Central Alps were at a higher elevation in the Neogene. Such a conclusion is supported by earlier work, in which a paleoaltitude of 5000 meters was proposed on the basis of large erratic boulders found at low elevations far from their origin.
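As an order-of-magnitude check of the quoted flux (with assumed volumetric heat capacities, roughly 4.2 J cm⁻³ K⁻¹ for water and 2.5 J cm⁻³ K⁻¹ for rock; this is not the paper's calculation), the heat advected per unit area by the meteoric water over 5 Myr can be compared with the heat released by cooling a rock slab by 100 °C:

```latex
\[
  Q_{\mathrm{water}}
  = q\, t\, (\rho c)_{w}\, \Delta T_{w}
  = 0.15\ \mathrm{cm\,yr^{-1}} \times 5\times10^{6}\ \mathrm{yr}
    \times 4.2\ \mathrm{J\,cm^{-3}\,K^{-1}} \times 40\ \mathrm{K}
  \approx 1.3\times10^{8}\ \mathrm{J\,cm^{-2}}
\]
\[
  H = \frac{Q_{\mathrm{water}}}{(\rho c)_{r}\, \Delta T_{r}}
    \approx \frac{1.3\times10^{8}\ \mathrm{J\,cm^{-2}}}
                 {2.5\ \mathrm{J\,cm^{-3}\,K^{-1}} \times 100\ \mathrm{K}}
    \approx 5\times10^{5}\ \mathrm{cm} = 5\ \mathrm{km}
\]
```

Under these assumed properties, the quoted flux carries away heat equivalent to cooling a rock column several kilometres thick by 100 °C within 5 Myr, which makes the cooling estimate in the abstract energetically plausible.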
Abstract:
Gumbel analyses were carried out on rainfall time series at 151 locations in Switzerland for 4 different 30-year periods in order to estimate daily extreme precipitation for a return period of 100 years. These estimates were compared with the maximum daily values measured during the last 100 years (1911-2010) to test the effectiveness of these analyses. The comparison shows that the analyses provide good results for 50 to 60% of the locations in the country when based on the 1961-1990 and 1980-2010 rainfall time series. On the other hand, daily precipitation with a return period of 100 years is underestimated at most locations when based on the 1931-1960 and especially the 1911-1940 time series. This underestimation results from the increase in maximum daily precipitation recorded from 1911 to 2010 at 90% of the locations in Switzerland.
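A minimal sketch of the procedure (synthetic annual maxima, not the Swiss series) fits a Gumbel distribution to 30 years of annual daily-precipitation maxima and reads off the 100-year return level:

```python
# Gumbel fit and 100-year return level from 30 annual maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_maxima = stats.gumbel_r.rvs(loc=60.0, scale=15.0, size=30,
                                   random_state=rng)   # mm/day, synthetic

loc, scale = stats.gumbel_r.fit(annual_maxima)   # estimate location and scale
T = 100.0                                        # return period in years
x_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
print(f"estimated 100-year daily precipitation: {x_T:.1f} mm")
```

Because the return level is extrapolated from only 30 values, estimates from different 30-year windows can differ substantially, which is the sensitivity the abstract documents.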
Abstract:
Extensional detachment systems separate hot footwalls from cool hanging walls, but the degree to which this thermal gradient is the product of ductile or brittle deformation, or a preserved original transient geotherm, is unclear. Oxygen isotope thermometry using recrystallized quartz-muscovite pairs indicates a smooth thermal gradient (140 °C/100 m) across the gently dipping, quartzite-dominated detachment zone that bounds the Raft River core complex in northwest Utah (United States). Hydrogen isotope values of muscovite (δD_Ms ≈ −100‰) and of fluid inclusions in quartz (δD_fluid ≈ −85‰) indicate the presence of meteoric fluids during detachment dynamics. Recrystallized grain-shape fabrics and quartz c-axis fabric patterns reveal a large component of coaxial strain (pure shear), consistent with thinning of the detachment section. Therefore, the high thermal gradient preserved in the Raft River detachment reflects the transient geotherm that developed owing to shearing, thinning, and the potentially prominent role of convective flow of surface fluids.