938 results for: high-resolution Trentino Alto Adige data set, climatology, daily temperature, complex orography


Relevance:

100.00%

Publisher:

Abstract:

A two-stage methodology is developed to obtain future projections of daily relative humidity (RH) in a river basin for climate change scenarios. In the first stage, Support Vector Machine (SVM) models are developed to downscale nine sets of predictor variables (large-scale atmospheric variables) to monthly RH in a river basin for four Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (SRES) scenarios (A1B, A2, B1 and COMMIT). Uncertainty in the future projections of RH is studied for combinations of SRES scenarios and selected predictors. Subsequently, in the second stage, the monthly sequences of RH are disaggregated to daily scale using the k-nearest neighbor method. The effectiveness of the developed methodology is demonstrated through application to the catchment of the Malaprabha reservoir in India. For downscaling, the probable predictor variables are extracted from (1) the National Centers for Environmental Prediction reanalysis data set for the period 1978-2000 and (2) simulations of the third-generation Canadian Coupled Global Climate Model for the period 1978-2100. The performance of the downscaling and disaggregation models is evaluated by split-sample validation. Results show that, among the SVM models, the model developed using predictors pertaining only to the land location performed better. RH is projected to increase in the future for the A1B and A2 scenarios, while no trend is discerned for B1 and COMMIT.
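The second stage lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of k-nearest-neighbour temporal disaggregation, assuming a toy history of (monthly mean, daily values) pairs; the function names and the rescale-the-nearest-neighbour rule are illustrative choices, not the study's exact procedure.

```python
# Hypothetical sketch of k-NN monthly-to-daily disaggregation.
# All names and numbers are illustrative, not from the study.

def knn_disaggregate(monthly_rh, history, k=3):
    """Disaggregate a monthly RH value to daily values.

    history: list of (monthly_mean, daily_values) pairs from the
    observed record. The daily pattern of the closest historical
    month (by monthly mean) is rescaled to match monthly_rh.
    """
    # Rank historical months by distance to the target monthly value.
    neighbours = sorted(history, key=lambda h: abs(h[0] - monthly_rh))[:k]
    # Use the closest neighbour's pattern (a resampling variant would
    # instead pick one of the k neighbours with distance-based weights).
    mean, dailies = neighbours[0]
    scale = monthly_rh / mean
    return [d * scale for d in dailies]

history = [
    (70.0, [68.0, 70.0, 72.0]),   # toy 3-day "months" for illustration
    (80.0, [78.0, 80.0, 82.0]),
    (60.0, [58.0, 60.0, 62.0]),
]
daily = knn_disaggregate(74.0, history, k=2)
```

By construction the rescaled daily values average back to the target monthly value, which is the consistency property a disaggregation step needs.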


A careful comparison of the experimental results reported in the literature reveals different variations of the melting temperature, even for the same materials. Though there are different theoretical models, the thermodynamic model has been used extensively to understand the different variations of the size-dependent melting of nanoparticles. There are different hypotheses, such as homogeneous melting (HMH), liquid nucleation and growth (LNG) and liquid skin melting (LSM), to resolve the different variations of melting temperature reported in the literature. HMH and LNG account for the linear variation, whereas LSM is applied to understand the nonlinear behaviour in the plot of melting temperature against the reciprocal of particle size. However, a bird's-eye view reveals that either HMH or LSM has been used extensively by experimentalists. It has also been observed that no single hypothesis can explain size-dependent melting over the complete range. Therefore, we describe an approach which can predict the plausible hypothesis for a given data set of size-dependent melting temperatures. A variety of data have been analyzed to ascertain the hypothesis and to test the approach.
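Since HMH and LNG predict a melting temperature linear in the reciprocal of particle size, T_m(d) = T_bulk(1 - c/d), testing a candidate hypothesis can start from a least-squares fit against 1/d. The sketch below illustrates this on synthetic, gold-like numbers invented purely for illustration; it is not the authors' analysis.

```python
# Fit T_m against 1/d to recover the bulk melting point (intercept)
# and the size coefficient (slope), as the linear hypotheses predict.

def fit_linear(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Synthetic data following T_m(d) = 1234 K * (1 - 0.9 nm / d); the
# numbers are gold-like but invented for illustration.
diam = [3.0, 5.0, 10.0, 20.0]                   # particle diameter, nm
tm = [1234.0 * (1.0 - 0.9 / d) for d in diam]   # melting temperature, K
inv_d = [1.0 / d for d in diam]
a, b = fit_linear(inv_d, tm)   # a ~ bulk melting point; b = -1234 * 0.9
```

Large residuals from such a fit would signal the nonlinear regime where LSM, rather than HMH or LNG, is the plausible hypothesis.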


The rapid disruption of tropical forests probably imperils global biodiversity more than any other contemporary phenomenon(1-3). With deforestation advancing quickly, protected areas are increasingly becoming final refuges for threatened species and natural ecosystem processes. However, many protected areas in the tropics are themselves vulnerable to human encroachment and other environmental stresses(4-9). As pressures mount, it is vital to know whether existing reserves can sustain their biodiversity. A critical constraint in addressing this question has been that data describing a broad array of biodiversity groups have been unavailable for a sufficiently large and representative sample of reserves. Here we present a uniquely comprehensive data set on changes over the past 20 to 30 years in 31 functional groups of species and 21 potential drivers of environmental change, for 60 protected areas stratified across the world's major tropical regions. Our analysis reveals great variation in reserve 'health': about half of all reserves have been effective or performed passably, but the rest are experiencing an erosion of biodiversity that is often alarmingly widespread taxonomically and functionally. Habitat disruption, hunting and forest-product exploitation were the strongest predictors of declining reserve health. Crucially, environmental changes immediately outside reserves seemed nearly as important as those inside in determining their ecological fate, with changes inside reserves strongly mirroring those occurring around them. These findings suggest that tropical protected areas are often intimately linked ecologically to their surrounding habitats, and that a failure to stem broad-scale loss and degradation of such habitats could sharply increase the likelihood of serious biodiversity declines.


Estimation of soil parameters by inverse modeling, using observations of either surface soil moisture or crop variables, has been successfully attempted in many studies, but difficulties in estimating root zone properties arise when heterogeneous layered soils are considered. The objective of this study was to explore the potential of combining observations of surface soil moisture and crop variables (leaf area index (LAI) and above-ground biomass) for estimating soil parameters (water holding capacity and soil depth) in a two-layered soil system by inversion of the crop model STICS. This was performed using the GLUE method on a synthetic data set covering varying soil types and on a data set from a field experiment carried out in two maize plots in South India. The main results were: (i) the combination of surface soil moisture and above-ground biomass consistently provided good estimates, with small uncertainty, of the soil properties of the two soil layers, for a wide range of soil parameter values, in both the synthetic and the field experiment; (ii) above-ground biomass was found to give relatively better estimates and lower uncertainty than LAI when combined with surface soil moisture, especially for estimation of soil depth; (iii) surface soil moisture data, either alone or combined with crop variables, provided a very good estimate of the water holding capacity of the upper soil layer with very small uncertainty, whereas using surface soil moisture alone gave very poor estimates of the soil properties of the deeper layer; and (iv) using crop variables alone (either above-ground biomass or LAI) provided reasonable estimates of the deeper layer properties, depending on the soil type, but poor estimates of the first layer properties.
The robustness of combining observations of surface soil moisture and above-ground biomass for estimating two-layer soil properties, demonstrated here using both synthetic and field experiments, now needs to be tested over a broader range of climatic conditions and crop types to assess its potential for spatial applications. (C) 2012 Elsevier B.V. All rights reserved.
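The GLUE step can be sketched compactly. Below, a one-parameter toy bucket model stands in for STICS, and the uniform sampling range, the inverse-error likelihood and the keep-the-top-half behavioural cutoff are illustrative simplifications (GLUE implementations normally use a formal likelihood threshold); none of this is the study's exact setup.

```python
# Minimal GLUE-style sketch: sample a parameter, run a toy model,
# weight each sample by a likelihood, and form a weighted estimate
# from the behavioural (best-scoring) parameter sets.
import random

def toy_model(whc, days=50, rain=5.0, et=4.0):
    """Toy bucket model: soil moisture gains (rain - et) per day,
    capped at the water-holding capacity whc."""
    sm, out = 20.0, []
    for _ in range(days):
        sm = min(max(sm + rain - et, 0.0), whc)
        out.append(sm)
    return out

def glue_estimate(observed, lo=10.0, hi=60.0, n=500, seed=0):
    """Sample whc uniformly, score by an inverse-error likelihood,
    keep the top half as 'behavioural', return a weighted estimate."""
    rng = random.Random(seed)
    scored = []
    for _ in range(n):
        whc = rng.uniform(lo, hi)
        sim = toy_model(whc)
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        scored.append((1.0 / (1.0 + sse), whc))   # crude likelihood
    scored.sort(reverse=True)
    behavioural = scored[: n // 2]
    wsum = sum(w for w, _ in behavioural)
    return sum(w * p for w, p in behavioural) / wsum

obs = toy_model(30.0)      # synthetic "observations" with whc = 30
est = glue_estimate(obs)   # should recover a value near 30
```

The spread of the behavioural set (not shown) is what GLUE reports as parameter uncertainty, mirroring the uncertainty statements in the abstract.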


Competition theory predicts that local communities should consist of species that are more dissimilar than expected by chance. We find a strikingly different pattern in a multicontinent data set (55 presence-absence matrices from 24 locations) on the composition of mixed-species bird flocks, which are important sub-units of local bird communities the world over. By using null models and randomization tests followed by meta-analysis, we find the association strengths of species in flocks to be strongly related to similarity in body size and foraging behavior and higher for congeneric compared with noncongeneric species pairs. Given the local spatial scales of our individual analyses, differences in the habitat preferences of species are unlikely to have caused these association patterns; the patterns observed are most likely the outcome of species interactions. Extending group-living and social-information-use theory to a heterospecific context, we discuss potential behavioral mechanisms that lead to positive interactions among similar species in flocks, as well as ways in which competition costs are reduced. Our findings highlight the need to consider positive interactions along with competition when seeking to explain community assembly.
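The null-model step can be illustrated with a minimal randomization test (a sketch of the general approach, not the authors' pipeline): compare the observed co-occurrence of two species across flocks to its distribution under shuffles that preserve each species' prevalence.

```python
# Randomization test for pairwise species association in a
# presence-absence matrix (rows = species, columns = flocks).
import random

def cooccurrence(m, i, j):
    """Number of flocks (columns) containing both species i and j."""
    return sum(1 for a, b in zip(m[i], m[j]) if a and b)

def null_p_value(m, i, j, n_perm=2000, seed=0):
    """One-sided p-value for positive association of species i and j
    under a null that shuffles species j's row (prevalence fixed)."""
    rng = random.Random(seed)
    obs = cooccurrence(m, i, j)
    row_j = list(m[j])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(row_j)
        if sum(1 for a, b in zip(m[i], row_j) if a and b) >= obs:
            hits += 1
    return hits / n_perm

# Two species seen together in the same 4 of 8 flocks: a strong
# positive association, so the p-value should be small (~1/70 here).
matrix = [
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
]
p = null_p_value(matrix, 0, 1)
```

Aggregating such association scores across many matrices is the kind of input the meta-analysis in the abstract operates on.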


This paper discusses the use of Jason-2 radar altimeter measurements to estimate the Ganga-Brahmaputra surface freshwater flux into the Bay of Bengal for the period mid-2008 to December 2011. A previous estimate was generated for 1993-2008 using TOPEX-Poseidon, ERS-2 and ENVISAT, and is now extended using Jason-2. To take full advantage of the newly available in situ rating curves, the processing scheme is adapted and the adjustments of the methodology are discussed here. First, using a large sample of in situ river height measurements, we estimate the standard error of Jason-2-derived water levels over the Ganga and the Brahmaputra to be 0.28 m and 0.19 m respectively, less than ~4% of the annual peak-to-peak variations of these two rivers. Using the in situ rating curves between water levels and river discharges, we show that Jason-2 accurately infers instantaneous Ganga and Brahmaputra discharges for 2008-2011, with mean errors ranging from ~2180 m³/s (6.5%) over the Brahmaputra to ~1458 m³/s (13%) over the Ganga. The combined Ganga-Brahmaputra monthly discharges meet the requirement of acceptable accuracy (15-20%), with a mean error of ~16% for 2009-2011 and ~17% for 1993-2011. The Ganga-Brahmaputra monthly discharge at the river mouths is then presented, showing a marked interannual variability with a standard deviation of ~12,500 m³/s, much larger than the data set uncertainty. Finally, using in situ sea surface salinity observations, we illustrate the possible impact of an extreme continental freshwater discharge event on the northern Bay of Bengal, as observed in 2008.
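The conversion from altimetric water level to discharge relies on a rating curve; a standard power-law form is sketched below with invented coefficients (not the Ganga or Brahmaputra curves used in the paper), together with a crude first-order propagation of a ~0.19 m level error.

```python
# Power-law rating curve: Q = a * (h - h0)**b, with h the water level,
# h0 the datum of zero flow, and a, b fitted constants.
# The coefficient values here are illustrative only.

def rating_curve(h, a, h0, b):
    """Discharge (m^3/s) from water level h (m) above datum h0 (m)."""
    if h <= h0:
        return 0.0
    return a * (h - h0) ** b

# Toy coefficients: a = 500, h0 = 1.0 m, b = 1.5.
q = rating_curve(4.0, a=500.0, h0=1.0, b=1.5)

# Propagate a 0.19 m level error by finite difference.
dq = rating_curve(4.19, 500.0, 1.0, 1.5) - q
```

Dividing dq by q gives the relative discharge error implied by the level error, which is the kind of budget the paper's 6.5-13% figures summarize.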


In the present study, variable-temperature FT-IR spectroscopic investigations were used to characterize the spectral changes in oleic acid during heating in the temperature range from -30 °C to 22 °C. In order to extract more information about the spectral variations taking place during the phase transition process, 2D correlation spectroscopy (2DCOS) was employed for the C=O stretching and CH2 rocking bands of oleic acid. However, the interpretation of these spectral variations in the FT-IR spectra is not straightforward, because the absorption bands are heavily overlapped and change due to two processes: recrystallization of the γ-phase and melting of the oleic acid. Furthermore, the solid-phase transition from the γ- to the α-phase was also observed between -4 °C and -2 °C. Thus, for a more detailed 2DCOS analysis, we split the spectral data set into subsets recorded between -30 °C and -16 °C, -16 °C and 10 °C, and 10 °C and 22 °C. In the corresponding synchronous and asynchronous 2D correlation plots, absorption bands characteristic of the crystalline and amorphous regions of oleic acid were separated.
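The synchronous part of a 2DCOS analysis can be written down compactly. The sketch below follows the standard covariance definition of the synchronous spectrum (Noda's formalism); the three toy "spectra" are invented to show how in-phase bands correlate positively and out-of-phase bands negatively.

```python
# Synchronous 2D correlation spectrum from a perturbation series of
# spectra: Phi[v1][v2] = (1/(m-1)) * sum_j ydyn[j][v1] * ydyn[j][v2],
# where ydyn is the mean-centred (dynamic) spectrum.

def synchronous_2dcos(spectra):
    """spectra: list of m spectra, each a list of intensities at the
    same wavenumber positions. Returns the synchronous matrix."""
    m, n = len(spectra), len(spectra[0])
    mean = [sum(s[v] for s in spectra) / m for v in range(n)]
    dyn = [[s[v] - mean[v] for v in range(n)] for s in spectra]
    return [[sum(dyn[j][v1] * dyn[j][v2] for j in range(m)) / (m - 1)
             for v2 in range(n)] for v1 in range(n)]

# Bands 0 and 1 grow with temperature while band 2 shrinks, so
# Phi[0][1] is positive and Phi[0][2] negative.
spectra = [
    [1.0, 2.0, 5.0],
    [2.0, 3.0, 4.0],
    [3.0, 4.0, 3.0],
]
phi = synchronous_2dcos(spectra)
```

The asynchronous spectrum replaces one dynamic series with its Hilbert-Noda transform and is what resolves the overlapped, sequentially changing bands the abstract mentions.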


Query-focused summarization is the task of producing a compressed text of an original set of documents based on a query. Documents can be viewed as a graph with sentences as nodes, and edges can be added based on sentence similarity. Graph-based ranking algorithms which use a 'biased random surfer model', like topic-sensitive LexRank, have been successfully applied to query-focused summarization. In these algorithms, the random walk is biased towards the sentences which contain query-relevant words. Specifically, it is assumed that the random surfer knows the query relevance score of the sentence to which he jumps. However, neighbourhood information of the sentence to which he jumps is completely ignored. In this paper, we propose a look-ahead version of topic-sensitive LexRank. We assume that the random surfer not only knows the query relevance of the sentence to which he jumps but can also look N steps ahead from that sentence to find the query relevance scores of future sentences. Using this look-ahead information, we identify the sentences which are indirectly related to the query by counting the number of hops needed to reach a sentence which has query-relevant words. We then bias the random walk towards these indirectly query-relevant sentences as well as the sentences which contain query-relevant words. Experimental results show a 20.2% increase in ROUGE-2 score compared to topic-sensitive LexRank on the DUC 2007 data set. Further, our system outperforms the best systems in DUC 2006, and results are comparable to state-of-the-art systems.
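One reading of the look-ahead mechanism can be sketched as follows: first spread query-relevance credit to sentences within N hops in the similarity graph, then run a topic-sensitive (biased) random walk with the spread scores as the teleport distribution. The hop discount, the max-combination rule and all names are illustrative assumptions, not the paper's exact formulation.

```python
# Look-ahead bias + biased random walk (LexRank-style power iteration).

def look_ahead_bias(adj, rel, n_steps=1, decay=0.5):
    """Give each sentence credit for query-relevant neighbours
    reachable within n_steps hops, discounted by `decay` per hop;
    returns a normalized teleport distribution."""
    score = list(rel)
    frontier = rel
    for _ in range(n_steps):
        nxt = [0.0] * len(rel)
        for i, nbrs in enumerate(adj):
            for j in nbrs:
                nxt[j] = max(nxt[j], frontier[i] * decay)
        score = [max(s, x) for s, x in zip(score, nxt)]
        frontier = nxt
    total = sum(score)
    return [s / total for s in score]

def biased_lexrank(adj, bias, d=0.85, iters=50):
    """Power iteration for a random walk that teleports to `bias`."""
    n = len(adj)
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) * bias[i] for i in range(n)]
        for i, nbrs in enumerate(adj):
            if nbrs:
                share = d * r[i] / len(nbrs)
                for j in nbrs:
                    new[j] += share
            else:  # dangling node: spread its mass uniformly
                for j in range(n):
                    new[j] += d * r[i] / n
        r = new
    return r

# Four sentences in a chain; only sentence 0 contains query words,
# but sentence 1 gains look-ahead credit for being one hop away.
adj = [[1], [0, 2], [1, 3], [2]]
rel = [1.0, 0.0, 0.0, 0.0]
ranks = biased_lexrank(adj, look_ahead_bias(adj, rel, n_steps=1))
```

Top-ranked sentences are then extracted, subject to the usual redundancy checks, to form the summary.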


There are many applications, such as software for processing customer records in telecom, patient records in hospitals, or email software accessing a single email in a mailbox, which require access to a single record in a database consisting of millions of records. A basic feature of these applications is that they need to access data sets which are very large but simple. Cloud computing provides the computing requirements for this new generation of applications involving very large data sets, which cannot be handled efficiently using traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features with a view to characterizing the storage requirements of very large data sets; we hope this comparison will act as a catalyst for the design of storage services for very large data set requirements in the future. We also give a brief overview of other kinds of storage that have emerged in the recent past for cloud computing.


This paper presents an improved hierarchical clustering algorithm for the land cover mapping problem using a quasi-random distribution. Initially, Niche Particle Swarm Optimization (NPSO) with a pseudo-/quasi-random distribution is used for splitting the data into a number of cluster centers satisfying the Bayesian Information Criterion (BIC). The main objective is to search for and locate the best possible number of clusters and their centers. NPSO, which depends highly on the initial distribution of particles in the search space, has not been exploited to its full potential. In this study, we compare the more uniformly distributed quasi-random distribution with the pseudo-random distribution in NPSO for splitting the data set. Here, the Faure method has been used to generate the quasi-random distribution. The performance of previously proposed methods, namely K-means, Mean Shift Clustering (MSC), and NPSO with a pseudo-random distribution, is compared with the proposed approach, NPSO with the quasi-random (Faure) distribution. These algorithms are applied to a synthetic data set and a multi-spectral satellite image (Landsat 7 Thematic Mapper). From the results obtained, we conclude that using a quasi-random sequence with NPSO in the hierarchical clustering algorithm results in more accurate data classification.
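The paper generates its quasi-random particle positions with the Faure method. As a simpler stand-in with the same low-discrepancy character, the sketch below builds a 2-D Halton sequence from radical inverses; this is an illustrative substitute for initializing particles more uniformly than pseudo-random draws, not the Faure construction itself.

```python
# Low-discrepancy point generation via radical inverses (Halton).

def van_der_corput(n, base=2):
    """Radical inverse of the integer n in the given base, in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        denom *= base
        n, rem = divmod(n, base)
        q += rem / denom
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of a 2-D Halton sequence (coprime bases)."""
    return [tuple(van_der_corput(i, b) for b in bases)
            for i in range(1, n_points + 1)]

# Eight candidate initial particle positions in the unit square,
# filling the space far more evenly than eight pseudo-random draws.
pts = halton(8)
```

In the NPSO setting, each such point would be scaled to the data's bounding box and used as an initial particle position before the velocity updates begin.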


In this paper, we explore noise-tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set that is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example. The probability that the class label of an example is corrupted is a function of the feature vector of the example. This accounts for most kinds of noisy data one encounters in practice. We say that a learning method is noise-tolerant if the classifiers learnt with noise-free data and with noisy data both have the same classification accuracy on the noise-free data. In this paper, we analyze the noise-tolerance properties of risk minimization under different loss functions. We show that risk minimization under the 0-1 loss function has impressive noise-tolerance properties and that risk minimization under the squared error loss is tolerant only to uniform noise; risk minimization under other loss functions is not noise-tolerant. We conclude the paper with a discussion of the implications of these theoretical results.
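The 0-1 loss claim can be made concrete with a toy check (an illustration of the intuition, not the paper's proof): under uniform label noise, every classifier's 0-1 risk is shifted by the same affine map, so the risk-minimizing classifier is unchanged. Below, 1-D threshold classifiers are compared on clean data and on data where a fixed subset of labels is flipped (deterministic here for reproducibility, standing in for uniform noise).

```python
# Compare the 0-1-risk-minimizing threshold on clean vs noisy labels.

def zero_one_risk(threshold, data):
    """Fraction of (x, y) pairs misclassified by sign(x - threshold)."""
    errs = sum(1 for x, y in data if (1 if x > threshold else -1) != y)
    return errs / len(data)

def best_threshold(data, candidates):
    """Candidate threshold with the lowest empirical 0-1 risk."""
    return min(candidates, key=lambda t: zero_one_risk(t, data))

clean = [(-2.0, -1), (-1.0, -1), (1.0, 1), (2.0, 1), (0.5, 1), (-0.5, -1)]
# Flip every third label (one from each class) as a stand-in for
# symmetric label noise.
noisy = [(x, -y if i % 3 == 0 else y) for i, (x, y) in enumerate(clean)]
cands = [-1.5, -0.25, 0.0, 0.25, 1.5]
t_clean = best_threshold(clean, cands)
t_noisy = best_threshold(noisy, cands)
```

The minimizer is the same threshold in both cases, which is exactly the noise-tolerance property the abstract attributes to 0-1 risk minimization.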


Clustering has been the most popular method for data exploration. Clustering partitions the data set into sub-partitions based on some measure, say a distance measure, where each partition carries its own significant information. A number of algorithms have been explored for this purpose; one such algorithm is Particle Swarm Optimization (PSO), a population-based heuristic search technique derived from swarm intelligence. In this paper, we present an improved version of Particle Swarm Optimization in which each feature of the data set is given significance by adding random weights, which also minimizes any distortions in the data set. The performance of the proposed algorithm is evaluated using benchmark data sets from the Machine Learning Repository. The experimental results show that our proposed methodology performs significantly better than previously reported approaches.
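The feature-weighting idea can be sketched as a weighted distance inside the clustering fitness that a PSO particle would minimize. The uniform random weights below are one illustrative reading of "adding some random weights", not necessarily the paper's exact scheme.

```python
# Feature-weighted distance and the clustering fitness a PSO particle
# (encoding a set of centroids) would be scored with.
import random

def weighted_dist(a, b, w):
    """Weighted Euclidean distance between feature vectors a and b."""
    return sum(wi * (ai - bi) ** 2
               for ai, bi, wi in zip(a, b, w)) ** 0.5

def clustering_fitness(data, centroids, w):
    """Sum of weighted distances of points to their nearest centroid
    (lower is better)."""
    return sum(min(weighted_dist(x, c, w) for c in centroids)
               for x in data)

random.seed(0)
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
w = [random.uniform(0.5, 1.5) for _ in range(2)]   # random feature weights

# Well-placed centroids score much better than centroids that ignore
# the two obvious groups.
good = clustering_fitness(data, [(0.05, 0.0), (5.05, 5.0)], w)
bad = clustering_fitness(data, [(2.5, 2.5), (2.6, 2.5)], w)
```

In the full algorithm, each particle's position encodes a candidate set of centroids and the PSO velocity updates search for the position minimizing this fitness.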


Residue depth accurately measures burial and parameterizes the local protein environment. Depth is the distance of any atom/residue to the closest bulk water. We consider the non-bulk waters to occupy cavities, whose volumes are determined using a Voronoi procedure. Our estimation of cavity sizes is statistically superior to estimates made by CASTp and VOIDOO, and on par with McVol, over a data set of 40 cavities. Our calculated cavity volumes correlated best with the experimentally determined destabilization of 34 mutants from five proteins. Some of the cavities identified are capable of binding small-molecule ligands. In this study, we have enhanced our depth-based predictions of binding sites by including evolutionary information. We demonstrate that on a database (LigASite) of ~200 proteins, we perform on par with ConCavity and better than MetaPocket 2.0. Our predictions, while less sensitive, are more specific and precise. Finally, we use depth (and other features) to predict the pKa values of GLU, ASP, LYS and HIS residues. Our results produce an average error of less than 1 pH unit over 60 predictions. Our simple empirical method is statistically on par with two other methods, superior to three, and inferior to only one. The DEPTH server (http://mspc.bii.a-star.edu.sg/depth/) is an ideal tool for rapid yet accurate structural analyses of protein structures.


We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in which each party learns a local classifier from its own data, and a third party then aggregates these classifiers in a privacy-preserving manner using a cryptographic scheme. The generalization performance of their algorithm is sensitive to the number of parties and the relative fractions of data owned by the different parties. In this paper, we describe a new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective rather than combining classifiers learned from optimizing local objectives. The algorithm achieves a slightly weaker form of differential privacy than that of [7], but provides improved generalization guarantees that do not depend on the number of parties or the relative sizes of the individual data sets. Experimental results corroborate our theoretical findings.
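The gradient-perturbation idea the abstract describes can be sketched as follows. The logistic loss, the clipping bound and the Gaussian noise scale are illustrative choices; this is the generic DP-SGD pattern, not the paper's exact algorithm or privacy calibration.

```python
# SGD with per-example gradient clipping and Gaussian noise: the core
# mechanism of gradient-perturbation differential privacy.
import math
import random

def dp_sgd(data, dim, epochs=5, lr=0.1, clip=1.0, sigma=0.1, seed=0):
    """Logistic regression (labels in {-1, +1}) trained by noisy SGD."""
    rng = random.Random(seed)
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            coef = -y / (1.0 + math.exp(margin))   # d/dw of log-loss
            g = [coef * xi for xi in x]
            # Clip the per-example gradient to L2 norm <= clip.
            norm = max(1.0, math.sqrt(sum(gi * gi for gi in g)) / clip)
            g = [gi / norm for gi in g]
            # Noisy update: the Gaussian noise is what buys privacy.
            w = [wi - lr * (gi + rng.gauss(0.0, sigma))
                 for wi, gi in zip(w, g)]
    return w

# A trivially separable toy set: positives along +x, negatives along -x.
data = [((1.0, 0.0), 1), ((0.9, 0.1), 1),
        ((-1.0, 0.0), -1), ((-0.9, -0.1), -1)]
w = dp_sgd(data, dim=2)
```

Because every party's examples enter the same noisy updates, the procedure optimizes the overall objective directly rather than averaging separately trained local classifiers.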


Data clustering is a common technique for statistical data analysis, used in many fields including machine learning and data mining. Clustering is the grouping of a data set, or more precisely, the partitioning of a data set into subsets (clusters), so that the data in each subset (ideally) share some common trait according to some defined distance measure. In this paper, we present a genetically improved version of the particle swarm optimization algorithm, a population-based heuristic search technique derived from particle swarm intelligence and the concepts of genetic algorithms (GA). The algorithm combines PSO concepts, such as the velocity and position update rules, with GA concepts such as selection, crossover and mutation. The performance of the proposed algorithm is evaluated using benchmark data sets from the Machine Learning Repository. The performance of our method is better than that of the k-means and PSO algorithms.
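The combination of update rules the abstract describes can be sketched as follows (one reading, with illustrative coefficients; not the authors' code): a standard PSO velocity/position step, followed by GA-style one-point crossover and Gaussian mutation applied to particle positions.

```python
# Hybrid GA-PSO building blocks: PSO update + crossover + mutation.
import random

rng = random.Random(1)

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update for a single particle."""
    new_vel = [w * v
               + c1 * rng.random() * (pb - x)
               + c2 * rng.random() * (gb - x)
               for x, v, pb, gb in zip(pos, vel, pbest, gbest)]
    return [x + v for x, v in zip(pos, new_vel)], new_vel

def crossover(a, b):
    """One-point crossover of two particle position vectors."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(pos, rate=0.1, scale=0.5):
    """Gaussian mutation applied gene-wise with probability `rate`."""
    return [x + rng.gauss(0.0, scale) if rng.random() < rate else x
            for x in pos]

pos, vel = [0.0, 0.0], [0.0, 0.0]
pos, vel = pso_step(pos, vel, pbest=[1.0, 1.0], gbest=[2.0, 2.0])
child_a, child_b = crossover([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
mutant = mutate([0.0, 0.0, 0.0])
```

In a full run, selection keeps the fitter of parent and offspring each generation, so the GA operators inject diversity while the PSO rules pull particles toward the best-known solutions.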