140 results for: incorporate probabilistic techniques
Abstract:
Many downscaling techniques have been developed in recent years to project station-scale hydrological variables from the large-scale atmospheric variables simulated by general circulation models (GCMs), in order to assess the hydrological impacts of climate change. This article compares the performance of three downscaling methods, viz. conditional random fields (CRF), K-nearest neighbours (KNN) and support vector machines (SVM), in downscaling precipitation in the Punjab region of India, which belongs to the monsoon regime. The CRF model is a recently developed method for downscaling hydrological variables in a probabilistic framework, while the SVM model is a popular machine learning tool valued for its ability to generalize and to capture nonlinear relationships between predictors and predictand. The KNN model is an analogue-type method that queries the training data for days similar to a given feature vector and classifies future days by random sampling from a weighted set of the K closest training examples. The models are applied to downscaling monsoon (June to September) daily precipitation at six locations in Punjab. Model performance with respect to the reproduction of various statistics, such as dry- and wet-spell length distributions, the daily rainfall distribution, and intersite correlations, is examined. The CRF and KNN models are found to perform slightly better than the SVM model in reproducing most daily rainfall statistics. These models are then used to project future precipitation at the six locations, using output from the Canadian global climate model (CGCM3) for three scenarios, viz. A1B, A2 and B1. The projections show a change in the probability density functions of daily rainfall amount and changes in the wet- and dry-spell distributions of daily precipitation. Copyright (C) 2011 John Wiley & Sons, Ltd.
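The analogue-type KNN sampling described in this abstract can be sketched as follows; the data, the Euclidean distance metric and the 1/rank weighting kernel are illustrative assumptions, not the paper's exact formulation:

```python
import math
import random

def knn_downscale(train_X, train_y, query, k=3, seed=0):
    # Rank training days by Euclidean distance to the query feature vector.
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    neighbours = [y for _, y in ranked[:k]]
    # Decreasing rank weights, w_j proportional to 1/j (an illustrative kernel).
    weights = [1.0 / j for j in range(1, len(neighbours) + 1)]
    rng = random.Random(seed)
    # Sample one neighbour's observed precipitation as the downscaled value.
    return rng.choices(neighbours, weights=weights, k=1)[0]
```

Sampling (rather than averaging) the neighbours is what lets an analogue method reproduce wet/dry-spell statistics instead of smoothing them out.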
Abstract:
In this work, an attempt has been made to evaluate the spatial variation of peak horizontal acceleration (PHA) and spectral acceleration (SA) values at rock level for south India based on probabilistic seismic hazard analysis (PSHA). These values were estimated by considering the uncertainties involved in magnitude, hypocentral distance and attenuation of seismic waves. Different models were used for the hazard evaluation, and they were combined using a logic tree approach. For evaluating the seismic hazard, the study area was divided into small grid cells of size 0.1° × 0.1°, and the hazard parameters were calculated at the centre of each cell by considering all the seismic sources within a radius of 300 km. Rock-level PHA values and SA at 1 s corresponding to a 10% probability of exceedance in 50 years were evaluated for all the grid points. Maps showing the spatial variation of rock-level PHA values and SA at 1 s for the whole of south India are presented in this paper. To compare the seismic hazard for some of the important cities, the seismic hazard curves and the uniform hazard response spectrum (UHRS) at rock level with a 10% probability of exceedance in 50 years are also presented.
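The "10% probability of exceedance in 50 years" criterion maps to an annual exceedance rate under the Poisson occurrence assumption standard in PSHA, P = 1 − exp(−λT). A minimal sketch of that conversion:

```python
import math

def annual_exceedance_rate(p_exceed, years):
    # Poisson occurrence: P = 1 - exp(-lam * T)  =>  lam = -ln(1 - P) / T
    return -math.log(1.0 - p_exceed) / years

lam = annual_exceedance_rate(0.10, 50)  # ~0.0021 events per year
return_period = 1.0 / lam               # the familiar ~475-year return period
```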
Abstract:
Lime-fly ash mixtures are used to manufacture fly ash bricks for load-bearing masonry. Lime-pozzolana reactions proceed slowly under ambient temperature conditions, and hence very long curing durations are required to achieve meaningful strength values. The present investigation examines improvements in strength development in lime-fly ash compacts through low-temperature steam curing and the use of additives such as gypsum. Results of density-strength-moulding water content relationships, the influence of lime-fly ash ratio, steam curing and the role of gypsum in strength development, and the characteristics of compacted lime-fly ash-gypsum bricks are discussed. The test results reveal that (a) strength increases with density irrespective of lime content, type of curing and moulding water content, (b) the optimum lime-fly ash ratio yielding maximum strength is about 0.75 under normal curing conditions, (c) 24 h of steam curing (at 80 °C) is sufficient to achieve nearly the maximum possible strength, (d) the optimum gypsum content yielding maximum compressive strength is 2%, and (e) with the gypsum additive it is possible to obtain lime-fly ash bricks or blocks having sufficient strength (> 10 MPa) after 28 days of normal wet burlap curing.
Abstract:
The Indian Summer Monsoon (ISM) precipitation recharges groundwater aquifers in a large portion of the Indian subcontinent. Monsoonal precipitation over the Indian region brings moisture from the Arabian Sea and the Bay of Bengal (BoB). A large difference in the salinity of these two reservoirs, owing to the large amount of freshwater discharged by continental rivers into the BoB and the dominant evaporation over the Arabian Sea, makes it possible to distinguish the isotopic signatures of water originating in these two bodies. Most bottled water manufacturers exploit natural groundwater resources, replenished by monsoonal precipitation, for bottling purposes. The work presented here relates the isotopic ratios of bottled water to latitude, moisture source and seasonality in precipitation isotope ratios, and investigates the impact of these factors on the isotopic composition of bottled water. The results show a strong relationship between latitude and the isotope ratios of both precipitation (obtained from the GNIP database) and bottled water. The approach can be used to predict the latitude at which bottled water was manufactured, and the paper provides two alternative approaches to site prediction. The limitations of this approach in identifying source locations and the uncertainty in latitude estimations are discussed. Furthermore, the method can also be used as a forensic tool for tracing the source location of bottled water from other regions. Copyright (C) 2011 John Wiley & Sons, Ltd.
Abstract:
The dial-a-ride problem (DARP) is an optimization problem that minimizes the cost of providing a door-to-door transport service based on customer requests. The optimization model presented in earlier studies is considered here. Owing to the non-linear nature of the objective function, traditional optimization methods are prone to converging to a local minimum. To overcome this pitfall we use metaheuristics, namely Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Genetic Algorithm (GA) and the Artificial Immune System (AIS). From the results obtained, we conclude that the Artificial Immune System method effectively tackles this optimization problem by providing optimal solutions. Crown Copyright (C) 2011 Published by Elsevier Ltd. All rights reserved.
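As one illustration of the metaheuristics named above, a generic simulated-annealing loop over a permutation-coded route; this is a toy stand-in with assumed parameters (swap neighbourhood, geometric cooling), not the paper's DARP formulation:

```python
import math
import random

def simulated_annealing(cost, route, t0=10.0, cooling=0.995, steps=2000, seed=0):
    rng = random.Random(seed)
    cur = route[:]
    best = cur[:]
    t = t0
    for _ in range(steps):
        cand = cur[:]
        i, j = rng.sample(range(len(cand)), 2)  # neighbourhood move: swap two stops
        cand[i], cand[j] = cand[j], cand[i]
        delta = cost(cand) - cost(cur)
        # Always accept improvements; accept worse moves with Boltzmann probability,
        # which is what lets the search escape local minima.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        t *= cooling  # geometric cooling schedule
    return best
```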
Abstract:
Diatoms are regarded as useful sources of neutral lipids, as liquid fuel precursors, as foods for the marine culture of zooplankters, larval and post-larval shrimp, copepods and juvenile oysters, and as micromachines in nanotechnology. Combining microscopic observation with in situ culturing has been useful in taxonomy, ecology, biomonitoring, biotechnology, etc. This communication reviews various culturing techniques for marine diatoms along with their relative merits.
Abstract:
This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension to the well-known Steiglitz-McBride algorithm is proposed to model the higher-frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
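The DPCM stage applied to the model parameters can be sketched as follows; the step size q and the toy parameter values are illustrative assumptions. The encoder tracks the decoder's reconstruction so quantization error does not accumulate:

```python
def dpcm_encode(params, q=0.05):
    codes, prev = [], 0.0
    for p in params:
        code = round((p - prev) / q)  # quantized prediction residual
        codes.append(code)
        prev += code * q              # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, q=0.05):
    out, prev = [], 0.0
    for c in codes:
        prev += c * q                 # accumulate dequantized differences
        out.append(prev)
    return out
```

Because successive parameters are correlated, the residuals are small integers that entropy-code far more compactly than the raw values.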
Abstract:
The problem of on-line recognition and retrieval of relatively weak industrial signals such as partial discharges (PD), buried in excessive noise, is addressed in this paper. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), owing to the overlapping broadband frequency spectra of PI and PD pulses; consequently, on-line, on-site PD measurement is hardly possible with conventional frequency-based DSP techniques. The observed PD signal is modelled as a linear combination of systematic and random components employing probabilistic principal component analysis (PPCA), and the pdf of the underlying stochastic process is obtained. The PD/PI pulses are taken as the mean of the process and modelled using non-parametric methods based on smooth FIR filters, with a maximum a posteriori (MAP) procedure employed to estimate the filter coefficients. The classification of the pulses is undertaken using a simple PCA classifier. The methods proposed by the authors were found to be effective in automatic retrieval of PD pulses while completely rejecting PI.
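The decomposition into a systematic mean and a principal-component subspace can be illustrated with a stdlib-only toy: the first principal component of centred data via power iteration. This is a stand-in for the PPCA/PCA step, not the authors' implementation:

```python
import math

def first_principal_component(X, iters=100):
    n, d = len(X), len(X[0])
    mean = [sum(col) / n for col in zip(*X)]          # systematic "mean" part
    Xc = [[x - m for x, m in zip(row, mean)] for row in X]
    v = [1.0] * d                                     # initial direction guess
    for _ in range(iters):
        # Power iteration: v <- normalize(Xc^T (Xc v)), proportional to C v
        # where C is the sample covariance matrix.
        s = [sum(row[j] * v[j] for j in range(d)) for row in Xc]
        w = [sum(Xc[i][j] * s[i] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return mean, v
```

Full PPCA additionally fits an isotropic noise variance, which is what yields the pdf of the process; that part is omitted here for brevity.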
Abstract:
The paper reports the development of new amplitude-comparator techniques which allow the instantaneous comparison of the amplitudes of signals derived from primary line quantities. These techniques are used to derive a variety of impedance characteristics. The merits of the new relaying system are the simplicity of the relay circuitry, the derivation of closed polar characteristics (i.e. quadrilateral) by a single measuring gate, and sharp discontinuities in the polar characteristics. Design principles and circuit models in schematic form are described and, in addition, a comprehensive theoretical basis for comparison is presented. Dynamic test results are presented for a quadrilateral characteristic of potentially general application.
Abstract:
This paper obtains a new, accurate model for sensitivity in power systems and uses it in conjunction with linear programming to solve load-shedding problems with a minimum loss of load. For cases where the error in the sensitivity model increases, other linear programming and quadratic programming models have been developed, taking currents at load buses, rather than load powers, as variables. A weighted error criterion is used to take the priority schedule into account; it can be either a linear or a quadratic function of the errors, and the appropriate programming technique is employed depending on the chosen function.