189 results for Köppen climate classification
Abstract:
General circulation models (GCMs) are routinely used to simulate future climatic conditions. However, rainfall outputs from GCMs poorly preserve temporal correlations, frequencies, and intensity distributions, which limits their direct application in downscaling and hydrological modeling studies. To address these limitations, raw outputs of GCMs or regional climate models are often bias-corrected using past observations. In this paper, a methodology is presented that uses a nested bias-correction approach to predict the frequencies and occurrences of severe droughts and wet conditions across India for a 50-year period (2050-2099) centered at 2075. Specifically, monthly time series of rainfall from 17 GCMs are used to draw conclusions for extreme events. An increasing trend in the frequencies of droughts and wet events is observed. The northern part of India and the coastal regions show the maximum increase in the frequency of wet events. Drought events are expected to increase in the west-central, peninsular, and central-northeast regions of India. (C) 2013 American Society of Civil Engineers.
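The abstract does not spell out the nested bias-correction procedure; as a rough illustration of the general idea, a standard monthly quantile-mapping correction (an assumed stand-in, not the authors' exact method) can be sketched as follows:

```python
import numpy as np

def quantile_map(obs, model_hist, model_fut, n_quantiles=99):
    """Map raw future model rainfall through the observed distribution.

    obs        : observed monthly rainfall over the historical period
    model_hist : model rainfall for the same historical period
    model_fut  : raw model rainfall for the future period
    """
    q = np.linspace(0.01, 0.99, n_quantiles)
    model_q = np.quantile(model_hist, q)   # model climatology quantiles
    obs_q = np.quantile(obs, q)            # observed climatology quantiles
    # Replace each future value with the observed value at the quantile
    # it occupies in the model climatology (values outside are clamped).
    return np.interp(model_fut, model_q, obs_q)
```

A nested scheme typically applies corrections of this kind at more than one time scale (e.g., monthly and then annually) so that the statistics at each scale are reproduced.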
Abstract:
This paper presents the classification, representation, and extraction of deformation features in sheet-metal parts. The thickness is constant for these shape features, and hence they are also referred to as constant-thickness features. A deformation feature is represented as a set of faces with a characteristic arrangement among the faces. Deformation of the base-sheet, or forming of material, creates Bends and Walls with respect to the base-sheet or a reference plane. These are referred to as Basic Deformation Features (BDFs). Compound deformation features having two or more BDFs are defined as characteristic combinations of Bends and Walls and represented as a graph called the Basic Deformation Features Graph (BDFG). The graph therefore represents a compound deformation feature uniquely. The characteristic arrangement of the faces and the types of bends belonging to the feature decide the type and nature of the deformation feature. Algorithms have been developed to extract and identify deformation features from a CAD model of sheet-metal parts. The proposed algorithm does not require folding and unfolding of the part as intermediate steps to recognize deformation features. Representations of typical features are illustrated, and results of extracting these deformation features from typical sheet-metal parts are presented and discussed. (C) 2013 Elsevier Ltd. All rights reserved.
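The paper's actual data structures are not given in the abstract; a minimal, hypothetical sketch of a face-adjacency graph in the spirit of the BDFG (all names and fields here are our assumptions) might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Face:
    """One face of the sheet-metal model, labelled as a Wall or a Bend."""
    face_id: int
    kind: str                              # "wall" or "bend"
    neighbours: set = field(default_factory=set)

class BDFG:
    """Face-adjacency graph: a compound feature is identified by the
    characteristic arrangement of its Walls and Bends."""
    def __init__(self):
        self.faces = {}

    def add_face(self, face_id, kind):
        self.faces[face_id] = Face(face_id, kind)

    def connect(self, a, b):
        # Record adjacency between two faces (undirected edge).
        self.faces[a].neighbours.add(b)
        self.faces[b].neighbours.add(a)

    def signature(self):
        # A crude matching key: sorted (kind, degree) pairs. Real feature
        # recognition would compare graph structure, not just this multiset.
        return sorted((f.kind, len(f.neighbours)) for f in self.faces.values())
```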
Abstract:
Forest-management goals in the context of climate change are to reduce the adverse impacts of climate change on biodiversity, ecosystem services, and carbon stocks. Developing an effective adaptation strategy requires knowledge of the nature and sources of forest vulnerability, so that carbon sinks can be conserved or enhanced. However, assessing the vulnerability of forest ecosystems is a challenging task, as the mechanisms that determine vulnerability cannot be observed directly. In this article, we list the challenges in forest vulnerability assessment and propose an assessment of inherent vulnerability using process-based indicators under the current climate. We also suggest periodic reassessment of vulnerability, which is necessary to review adaptation strategies for the management of forests and forest carbon stocks.
Abstract:
This paper presents an approach to model the expected impacts of climate change on irrigation water demand in a reservoir command area. A statistical downscaling model and an evapotranspiration model are used with general circulation model (GCM) output to predict the anticipated change in the monthly irrigation water requirement of a crop. Specifically, we quantify the likely changes in irrigation water demand at a location in the command area in response to the projected changes in precipitation and evapotranspiration at that location. Statistical downscaling with canonical correlation analysis is carried out to develop future scenarios of meteorological variables (rainfall, relative humidity (RH), wind speed (U2), radiation, and maximum (Tmax) and minimum (Tmin) temperatures), starting from simulations provided by a GCM for a specified emission scenario. The medium-resolution Model for Interdisciplinary Research on Climate (MIROC) GCM is used with the A1B scenario to assess the likely changes in irrigation demands for paddy, sugarcane, permanent garden, and semidry crops over the command area of the Bhadra reservoir, India. Results from the downscaling model suggest that monthly rainfall is likely to increase in the reservoir command area. RH, Tmax, and Tmin are also projected to increase, with small changes in U2. Consequently, the reference evapotranspiration, modeled by the Penman-Monteith equation, is predicted to increase. Irrigation requirements are assessed on a monthly scale at nine selected locations encompassing the Bhadra reservoir command area. Irrigation requirements are projected to increase in most cases, suggesting that the effect of the projected increase in rainfall on irrigation demand is offset by the projected changes in the other meteorological variables (viz., Tmax, Tmin, solar radiation, RH, and U2). Such an irrigation demand assessment carried out at the river-basin scale will be useful for future irrigation management systems. Copyright (c) 2012 John Wiley & Sons, Ltd.
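For reference, the FAO-56 form of the Penman-Monteith equation for reference evapotranspiration (presumably the variant applied here, though the abstract does not say) is:

```latex
ET_0 \;=\; \frac{0.408\,\Delta\,(R_n - G) \;+\; \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}
                {\Delta \;+\; \gamma\,(1 + 0.34\,u_2)}
```

where ET_0 is reference evapotranspiration (mm/day), R_n net radiation, G soil heat flux, T mean air temperature at 2 m (deg C), u_2 wind speed at 2 m (m/s), e_s - e_a the vapour-pressure deficit, Delta the slope of the saturation vapour-pressure curve, and gamma the psychrometric constant. Increases in temperature, radiation, and vapour-pressure deficit raise ET_0, which is the mechanism by which the projected warming offsets the projected rainfall increase.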
Abstract:
Myopathies are muscular diseases in which muscle fibers degenerate due to factors such as nutrient deficiency, infection, and mutations in myofibrillar proteins. The objective of this study is to identify biomarkers that distinguish various muscle mutants in Drosophila (fruit fly) using Raman spectroscopy. A Principal Components based Linear Discriminant Analysis (PC-LDA) classification model yielding >95% accuracy was developed to classify the different mutants, which represent various myopathies, according to their physiopathology.
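The abstract gives no implementation details; a minimal PC-LDA pipeline in the same spirit (synthetic data shapes and assumed preprocessing, not the study's actual setup) could be written as:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Placeholder spectra: 120 samples x 1024 wavenumber channels, 4 mutant classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1024))
y = rng.integers(0, 4, size=120)

pc_lda = make_pipeline(
    StandardScaler(),                 # normalise each wavenumber channel
    PCA(n_components=20),             # compress spectra to principal components
    LinearDiscriminantAnalysis(),     # discriminate mutant classes in PC space
)
print(cross_val_score(pc_lda, X, y, cv=5).mean())
```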
Abstract:
We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of the classification calibration dimension of a multiclass loss matrix, which measures the smallest 'size' of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a route different from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing 'low-dimensional' convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate that the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.
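In the standard formulation of this setting (our notation, which may differ from the paper's), a surrogate psi with prediction space C and decoding map pred is calibrated with respect to a loss matrix L if, for every class-probability vector p in the simplex Delta_n:

```latex
\inf_{\substack{u \in \mathcal{C} \,:\, \mathrm{pred}(u) \,\notin\, \arg\min_{t} \sum_{y} p_y L_{y,t}}}
  \sum_{y} p_y\, \psi(y, u)
\;>\;
\inf_{u \in \mathcal{C}} \sum_{y} p_y\, \psi(y, u)
\qquad \forall\, p \in \Delta_n ,
```

i.e., any sequence of predictions approaching the surrogate optimum must eventually decode to a Bayes-optimal action under L.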
Abstract:
We consider the problem of developing privacy-preserving machine learning algorithms in a distributed multiparty setting. Here different parties own different parts of a data set, and the goal is to learn a classifier from the entire data set without any party revealing any information about the individual data points it owns. Pathak et al. [7] recently proposed a solution to this problem in which each party learns a local classifier from its own data, and a third party then aggregates these classifiers in a privacy-preserving manner using a cryptographic scheme. The generalization performance of their algorithm is sensitive to the number of parties and the relative fractions of data owned by the different parties. In this paper, we describe a new differentially private algorithm for the multiparty setting that uses a stochastic gradient descent based procedure to directly optimize the overall multiparty objective, rather than combining classifiers learned from optimizing local objectives. The algorithm achieves a slightly weaker form of differential privacy than that of [7], but provides improved generalization guarantees that do not depend on the number of parties or the relative sizes of the individual data sets. Experimental results corroborate our theoretical findings.
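The algorithm itself is not included in the abstract; a heavily simplified sketch of noisy SGD for a logistic-regression objective (the clipping threshold, noise scale, and all names here are our assumptions, and calibrating the noise to a formal privacy guarantee requires a composition analysis omitted here) is:

```python
import numpy as np

def dp_sgd(X, y, noise_std=1.0, clip=1.0, lr=0.1, epochs=5, seed=0):
    """Noisy SGD for logistic regression with labels y in {-1, +1}.

    Each per-example gradient is clipped to norm `clip` and perturbed
    with Gaussian noise before the update; this is the generic DP-SGD
    pattern, not necessarily the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            g = -y[i] * X[i] / (1.0 + np.exp(margin))          # logistic gradient
            g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # clip
            g += rng.normal(scale=noise_std, size=g.shape)     # privatize
            w -= lr * g
    return w
```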
Abstract:
Transductive SVM (TSVM) is a well-known semi-supervised large-margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that determining the labels of unlabeled examples with fixed classifier weights is a linear programming problem, and we devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using the large-margin loss on a number of multi-class and hierarchical classification datasets. For the maxent loss, we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of the entropy regularization method that uses label constraints.
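As a rough illustration of that labeling subproblem (the constraint set and solver here are our assumptions, not the authors' technique): with classifier weights fixed and per-class balance constraints imposed, the assignment is a transportation-style LP whose relaxation has an integral optimum.

```python
import numpy as np
from scipy.optimize import linprog

def assign_labels(loss, class_counts):
    """Label unlabeled examples with classifier weights fixed.

    loss[i, j]   : loss incurred if example i is assigned class j
    class_counts : required examples per class; must sum to loss.shape[0]
    """
    n, k = loss.shape
    A_eq, b_eq = [], []
    for i in range(n):                      # each example gets exactly one label
        row = np.zeros(n * k); row[i * k:(i + 1) * k] = 1
        A_eq.append(row); b_eq.append(1)
    for j in range(k):                      # class-balance constraints
        col = np.zeros(n * k); col[j::k] = 1
        A_eq.append(col); b_eq.append(class_counts[j])
    res = linprog(loss.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, 1), method="highs")
    return res.x.reshape(n, k).argmax(axis=1)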
Abstract:
In this paper, we propose a simple and effective approach to classify H.264 compressed videos by capturing orientation information from the motion vectors. Our major contribution is computing a Histogram of Oriented Motion Vectors (HOMV) over partially overlapping hierarchical space-time cubes. HOMV is found to be very effective at describing the motion characteristics of these cubes. We then use the Bag of Features (BoF) approach to represent the video as a histogram over HOMV keywords obtained using k-means clustering. The resulting video feature is found to be very effective in classifying videos. We demonstrate our results with experiments on two large publicly available video databases.
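A minimal sketch of the per-cube descriptor (simplified on our part; the paper's cube layout and H.264 motion-vector parsing are not reproduced here):

```python
import numpy as np

def homv(mv_x, mv_y, n_bins=8):
    """Histogram of Oriented Motion Vectors for one space-time cube.

    mv_x, mv_y : arrays of motion-vector components inside the cube
    """
    angles = np.arctan2(mv_y, mv_x) % (2 * np.pi)    # orientation in [0, 2*pi)
    mags = np.hypot(mv_x, mv_y)                      # magnitude as bin weight
    hist, _ = np.histogram(angles, bins=n_bins,
                           range=(0, 2 * np.pi), weights=mags)
    return hist / (hist.sum() + 1e-12)               # normalized descriptor
```

Cube-level descriptors of this kind are then quantized with k-means, and the video is represented as a histogram of the resulting HOMV codewords.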
Abstract:
Sparse representation based classification (SRC) is one of the most successful methods developed in recent times for face recognition. Optimal projection for sparse representation based classification (OPSRC) [1] provides a dimensionality-reduction map that is intended to give optimal performance within the SRC framework. However, the computational complexity of this method is too high. Here, we propose a new projection technique using the data scatter matrix that is computationally superior to the optimal projection method, with classification accuracy comparable to OPSRC. The performance of the proposed approach is benchmarked on various publicly available face databases.
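One plausible reading of "projection using the data scatter matrix" (our assumption; the paper's exact criterion may differ) is projection onto the leading eigenvectors of the total scatter matrix:

```python
import numpy as np

def scatter_projection(X, d):
    """Project data onto the top-d eigenvectors of the total scatter matrix.

    X : data matrix (n_samples x n_features); returns projected data and map.
    """
    Xc = X - X.mean(axis=0)                 # center the data
    S = Xc.T @ Xc                           # total scatter matrix
    vals, vecs = np.linalg.eigh(S)          # eigenvalues in ascending order
    W = vecs[:, -d:]                        # top-d scatter directions
    return X @ W, W
```

Computing this map costs one eigendecomposition, which is cheaper than the iterative optimization OPSRC requires.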
Missing (in-situ) snow cover data hampers climate change and runoff studies in the Greater Himalayas
Abstract:
The Himalayas presently hold the largest ice masses outside the polar regions and thus (temporarily) store important freshwater resources. In contrast to the attention paid to glaciers, the role of runoff from snow cover has received comparatively little attention in the past, although (i) its contribution is thought to be at least as important as, or even more important than, that of ice melt in many Himalayan catchments, and (ii) climate change is expected to have widespread and significant consequences for snowmelt runoff. Here, we show that change assessment of snowmelt runoff and its timing is not as straightforward as often postulated, mainly because larger partial pressures of H2O, CO2, CH4, and other greenhouse gases might increase the net long-wave input for snowmelt quite significantly in a future atmosphere. In addition, changes in the short-wave energy balance, such as pollution of the snow cover by black carbon, or the sensible and latent heat contributions to snowmelt, are likely to alter future snowmelt and runoff characteristics as well. For the assessment of snow cover extent and depletion, but also for monitoring over the extremely large areas of the Himalayas, remote sensing has been used in the past and is likely to become even more important in the future. However, for the calibration and validation of remotely sensed data, and even more so in light of possible changes in the snow-cover energy balance, we strongly call for more in-situ measurements across the Himalayas, in particular daily data on new snow and snow-cover water equivalent, or the respective energy balance components. Moreover, data should be made accessible to the scientific community, so that climate change impacts on Himalayan snow cover, and their possible consequences for runoff, can be estimated more accurately. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
The impact of future climate change on the glaciers of the Karakoram and Himalaya (KH) is investigated using CMIP5 multi-model temperature and precipitation projections, together with a relationship between glacial accumulation-area ratio and mass balance developed for the region from the last 30 to 40 years of observational data. We estimate that the current (year 2000) glacial mass balance for the entire KH region is -6.6 ± 1 Gt a^-1, which decreases about sixfold to -35 ± 2 Gt a^-1 by the 2080s under the high-emission scenario RCP8.5. Under the low-emission scenario RCP2.6, however, the glacial mass loss only doubles, to -12 ± 2 Gt a^-1 by the 2080s. We also find that 10.6% and 27% of the glaciers could face 'eventual disappearance' by the end of the century under RCP2.6 and RCP8.5 respectively, underscoring the threat to water resources under high-emission scenarios.
Abstract:
The maximum entropy approach to classification is well studied in applied statistics and machine learning, and almost all the methods that exist in the literature are discriminative in nature. In this paper, we introduce a generative maximum entropy classification method with feature selection for high-dimensional data such as text datasets. To tackle the curse of dimensionality of large data sets, we employ a conditional independence assumption (naive Bayes) and perform feature selection simultaneously, by enforcing 'maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend the method to the multi-class case, we propose a new approach based on a multi-distribution divergence: we replace the Jeffreys divergence by the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality, and we show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. On the theoretical side, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. Performance and comparative studies of the proposed algorithms are demonstrated on high-dimensional text and gene-expression datasets, showing that our methods scale well to such data.
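For concreteness (our reading of the abstract, with notation that may differ from the paper's): the standard Jensen-Shannon divergence of class-conditional densities p_1, ..., p_k with prior weights pi_i is the weighted mean KL divergence to the arithmetic-mean mixture, and the AM-GM-based modification replaces that mixture by the (unnormalized) geometric mean:

```latex
\mathrm{JS}(p_1,\dots,p_k)
  = \sum_{i=1}^{k} \pi_i \, \mathrm{KL}\!\left(p_i \,\middle\|\, m\right),
\qquad m = \sum_{j=1}^{k} \pi_j \, p_j ,
\\[6pt]
\mathrm{JS}^{(GM)}(p_1,\dots,p_k)
  = \sum_{i=1}^{k} \pi_i \, \mathrm{KL}\!\left(p_i \,\middle\|\, g\right),
\qquad g = \prod_{j=1}^{k} p_j^{\pi_j} .
```

Since the geometric mean never exceeds the arithmetic mean, JS(GM) >= JS; and for k = 2 with equal weights, a short computation gives JS(GM) = (1/4)(KL(p_1||p_2) + KL(p_2||p_1)) = (1/4) J(p_1, p_2), which is the sense in which it generalizes the Jeffreys divergence.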