974 results for Mismatched uncertainties
Abstract:
General circulation models (GCMs) use transient climate simulations to predict future climate conditions. Coarse grid resolutions and process uncertainties necessitate the use of downscaling models to simulate precipitation. However, with multiple GCMs now available, selecting an atmospheric variable from a particular model that is representative of the ensemble mean becomes an important consideration in downscaling. The variable convergence score (VCS) provides a simple yet meaningful approach to this issue, offering a mechanism to evaluate variables against each other with respect to the stability they exhibit in future climate simulations. In this study, the VCS methodology is applied to 10 atmospheric variables of particular interest for downscaling precipitation, over India as a whole and also on a regional basis. The nested bias-correction methodology is used to remove systematic biases in the GCM simulations, and a single VCS curve is developed for the entire country. The generated VCS curve is expected to assist in quantifying variable performance across different GCMs, thus reducing the uncertainty in climate impact-assessment studies. The results indicate higher consistency across GCMs for pressure and temperature, and lower consistency for precipitation and related variables. Regional assessments, while broadly consistent with the overall results, indicate low convergence in atmospheric attributes for the northeastern parts of India.
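The abstract does not give the VCS formula, so the sketch below scores inter-GCM agreement with a simple inverse-coefficient-of-variation measure; the function name, the scoring rule, and the synthetic ensembles are illustrative assumptions, not the published definition.

```python
import numpy as np

def convergence_score(ensemble):
    """Toy convergence score for one atmospheric variable.

    ensemble: (n_gcms, n_times) array of bias-corrected future
    simulations. Returns a value in (0, 1]; higher means the GCMs
    agree more closely. Hypothetical stand-in for the published VCS.
    """
    spread = ensemble.std(axis=0).mean()       # mean inter-GCM spread
    scale = np.abs(ensemble).mean() + 1e-12    # typical magnitude of the variable
    return 1.0 / (1.0 + spread / scale)

# Pressure-like variables should score higher than precipitation-like ones.
rng = np.random.default_rng(0)
pressure = 1000.0 + rng.normal(0.0, 2.0, size=(10, 120))   # GCMs agree
precip = 5.0 + rng.gamma(2.0, 2.0, size=(10, 120))         # GCMs disagree
print(convergence_score(pressure), convergence_score(precip))
```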
Abstract:
Monte Carlo modeling of light transport in multilayered tissue (MCML) is modified to incorporate objects of various shapes (sphere, ellipsoid, cylinder, or cuboid) with refractive-index mismatched boundaries. These geometries are useful for modeling lymph nodes, tumors, blood vessels, capillaries, bones, the head, and other body parts. Mesh-based Monte Carlo (MMC) has also been used to compare results with those from MCML with embedded objects (MCML-EO). Our simulation assumes a realistic tissue model and also handles transmission/reflection at the object-tissue boundary arising from the refractive-index mismatch. A simulation with MCML-EO takes a few seconds, whereas MMC takes nearly an hour for the same geometry and optical properties. Contour plots of fluence distribution from MCML-EO and MMC correlate well. This study helps one decide which tool to use for modeling light propagation in biological tissue with embedded objects of regular shape. For irregular inhomogeneities in the model (tissue), MMC has to be used. If the embedded objects (inhomogeneities) have regular geometries (shapes), then MCML-EO is the better option, as simulations such as Raman scattering, fluorescence imaging, and optical coherence tomography are currently possible only with MCML. (C) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
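As a flavor of the boundary handling the abstract describes, the sketch below computes the unpolarized Fresnel reflectance at a refractive-index mismatched object-tissue interface and uses it in the usual Monte Carlo reflect-or-transmit coin flip; the indices and geometry are illustrative, and this is not the MCML-EO source code.

```python
import numpy as np

def fresnel_unpolarized(n1, n2, cos_i):
    """Unpolarized Fresnel reflectance for light crossing from a medium
    of index n1 into one of index n2 at incidence cosine cos_i."""
    sin_t2 = (n1 / n2) ** 2 * (1.0 - cos_i ** 2)   # Snell's law, squared
    if sin_t2 >= 1.0:                              # total internal reflection
        return 1.0
    cos_t = np.sqrt(1.0 - sin_t2)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return 0.5 * (rs + rp)

# When a photon step crosses the embedded-object boundary, draw a uniform
# random number against the reflectance to decide reflection vs. transmission.
rng = np.random.default_rng(1)
R = fresnel_unpolarized(n1=1.37, n2=1.50, cos_i=0.8)   # tissue -> object
print(R, "reflected" if rng.random() < R else "transmitted")
```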
Abstract:
This paper considers cooperative spectrum sensing algorithms for Cognitive Radios that focus on reducing the number of samples needed to make a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing in which the Cognitive Radios sequentially collect observations, make local decisions, and send them to the fusion center for further processing to reach a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modeled, more realistically, as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication for reporting is limited, thereby reducing the communication cost. We start with an algorithm where the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. Finally, we generalize these algorithms to handle uncertainties in SNR and fading. (C) 2014 Elsevier B.V. All rights reserved.
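As a sketch of the SPRT-like procedure at the fusion center, the code below accumulates log-likelihood ratios and stops at Wald's thresholds; the Gaussian observation model and the error rates are illustrative assumptions, not the paper's decentralized algorithm.

```python
import numpy as np

def sprt(samples, llr, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test with Wald's stopping thresholds.

    llr(x) is the per-sample log-likelihood ratio log p1(x)/p0(x);
    alpha/beta are the target false-alarm and miss probabilities.
    """
    upper = np.log((1 - beta) / alpha)    # decide H1: primary user present
    lower = np.log(beta / (1 - alpha))    # decide H0: spectrum is free
    stat = 0.0
    for n, x in enumerate(samples, start=1):
        stat += llr(x)
        if stat >= upper:
            return "H1", n
        if stat <= lower:
            return "H0", n
    return "undecided", len(samples)

# Example: detecting a unit mean shift in unit-variance Gaussian noise.
rng = np.random.default_rng(2)
llr = lambda x: x - 0.5          # log N(x; 1, 1) - log N(x; 0, 1)
print(sprt(rng.normal(1.0, 1.0, size=500), llr))
```

On average the sequential test decides with far fewer samples than a fixed-size test with the same error rates, which is the sample-saving motivation in the abstract.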
Abstract:
Although uncertainties in material properties have been addressed in the design of flexible pavements, most current modeling techniques assume that pavement layers are homogeneous. This paper addresses the influence of the spatial variability of the resilient moduli of pavement layers by evaluating the effect of the variance and the correlation length on pavement responses to loading. The spatially varying log-normal random field was integrated with the finite-difference method through an exponential autocorrelation function. Variation in the correlation length was found to have a marginal effect on the mean values of the critical strains and a noticeable effect on their standard deviation, which decreases as the correlation length decreases. This reduction in variance arises from spatial averaging over the softer and stiffer zones generated by the spatial variability. The increase in the mean value of critical strains with decreasing correlation length, although minor, illustrates that pavement performance is adversely affected by the presence of spatially varying layers. The study also confirmed that the higher the variability in the pavement layer moduli, introduced through a higher coefficient of variation (COV), the higher the variability in the pavement response. The study concludes that ignoring spatial variability, by modeling as homogeneous pavement layers that in reality have very short correlation lengths, can result in underestimation of the critical strains and thus an inaccurate assessment of pavement performance. (C) 2014 American Society of Civil Engineers.
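A minimal sketch of the key ingredient, assuming a 1-D profile for brevity: a log-normal random field with the exponential autocorrelation function the paper uses, generated by Cholesky factorization of the correlation matrix. The layer parameters are illustrative, and the paper's finite-difference coupling is not reproduced here.

```python
import numpy as np

def lognormal_field(n, dx, corr_len, mean, cov, seed=3):
    """Stationary log-normal field with rho(h) = exp(-|h| / corr_len)."""
    x = np.arange(n) * dx
    rho = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    # Underlying Gaussian moments that give the target mean and COV.
    var_ln = np.log(1.0 + cov ** 2)
    mu_ln = np.log(mean) - 0.5 * var_ln
    L = np.linalg.cholesky(rho + 1e-10 * np.eye(n))   # correlate the noise
    g = L @ np.random.default_rng(seed).standard_normal(n)
    return np.exp(mu_ln + np.sqrt(var_ln) * g)

# e.g. resilient modulus of a base layer: mean 200 MPa, COV 30 %.
E = lognormal_field(n=100, dx=0.1, corr_len=0.5, mean=200.0, cov=0.3)
print(E.mean(), E.std() / E.mean())   # sample moments near the targets
```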
Abstract:
Developments in statistical extreme value theory, which allow non-stationary modeling of changes in the frequency and severity of extremes, are explored to analyze changes in return levels of droughts for the Colorado River. The transient future return levels (conditional quantiles) derived from regional drought projections using appropriate extreme value models are compared with those from observed naturalized streamflows. The time of detection is computed as the time at which significant differences exist between the observed and future extreme drought levels, accounting for the uncertainties in their estimates. Projections from multiple climate model-scenario combinations are considered; no uniform pattern of changes in drought quantiles is observed across the projections. While some projections indicate a shift to another stationary regime, for many projections that are found to be non-stationary, detection of change in tail quantiles of droughts occurs within the 21st century, with no unanimity in the time of detection. Earlier detection is observed in drought levels of higher exceedance probability. (C) 2014 Elsevier Ltd. All rights reserved.
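For concreteness, here is a sketch of the return-level calculation that underlies such an analysis, using the standard GEV quantile formula with a linearly drifting location parameter; the numbers are illustrative, and the paper's drought-specific models and detection test are not reproduced.

```python
import numpy as np

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) distribution
    (Coles, 2001): the quantile exceeded once every T years on average."""
    y = -np.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-6:                        # Gumbel limit as xi -> 0
        return mu - sigma * np.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Non-stationary variant: let the location drift with time, so the
# 50-year level becomes a transient (conditional) quantile.
mu0, trend, sigma, xi = 10.0, -0.02, 2.0, 0.1
for year in (0, 50, 100):
    print(year, gev_return_level(mu0 + trend * year, sigma, xi, T=50))
```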
Abstract:
High wind poses hazards in a number of areas, such as structural safety, aviation, wind energy (where low wind speed is also a concern), and pollutant transport, to name a few. A good prediction tool for wind speed is therefore necessary in these areas. Like many other natural processes, the behavior of wind is associated with considerable uncertainties stemming from different sources, and a reliable prediction tool for wind speed should take these uncertainties into account. In this work, we propose a probabilistic framework for prediction of wind speed from measured spatio-temporal data. The framework is based on decompositions of the spatio-temporal covariance and simulation using these decompositions. A novel simulation method based on a tensor decomposition is used in this context. The proposed framework is composed of a set of four modules, which have the flexibility to accommodate further modifications. The framework is applied to measured wind speed data in Ireland, and both short- and long-term predictions are addressed.
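Here is a sketch of the decompose-and-simulate idea, in the plain matrix (two-way) setting rather than the paper's tensor setting: estimate the spatial covariance across stations, eigendecompose it, and draw synthetic fields from the decomposition. The data are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(8.0, 2.0, size=(5000, 12))   # stand-in: times x stations

mean = data.mean(axis=0)
C = np.cov(data, rowvar=False)                 # empirical spatial covariance
w, V = np.linalg.eigh(C)                       # spectral decomposition
w = np.clip(w, 0.0, None)                      # guard against round-off

def simulate(n):
    """Draw n synthetic station vectors with the empirical mean and
    covariance (a Gaussian sketch of the simulation module)."""
    z = rng.standard_normal((n, w.size))
    return mean + z @ (V * np.sqrt(w)).T

print(simulate(3))
```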
Abstract:
Advances in forest carbon mapping have the potential to greatly reduce uncertainties in the global carbon budget and to facilitate effective emissions mitigation strategies such as REDD+ (Reducing Emissions from Deforestation and Forest Degradation). Though broad-scale mapping is based primarily on remote sensing data, the accuracy of resulting forest carbon stock estimates depends critically on the quality of field measurements and calibration procedures. The mismatch in spatial scales between field inventory plots and the larger pixels of current and planned remote sensing products for forest biomass mapping is of particular concern, as it has the potential to introduce errors, especially if forest biomass shows strong local spatial variation. Here, we used 30 large (8-50 ha) globally distributed permanent forest plots to quantify the spatial variability in aboveground biomass density (AGBD in Mg ha^-1) at spatial scales ranging from 5 to 250 m (0.025-6.25 ha), and to evaluate the implications of this variability for calibrating remote sensing products using simulated remote sensing footprints. We found that local spatial variability in AGBD is large for standard plot sizes, averaging 46.3% for replicate 0.1 ha subplots within a single large plot, and 16.6% for 1 ha subplots. AGBD showed weak spatial autocorrelation at distances of 20-400 m, with autocorrelation higher in sites with higher topographic variability and statistically significant in half of the sites. We further show that when field calibration plots are smaller than the remote sensing pixels, the high local spatial variability in AGBD leads to a substantial "dilution" bias in calibration parameters, a bias that cannot be removed with standard statistical methods. Our results suggest that topography should be explicitly accounted for in future sampling strategies and that much care must be taken in designing calibration schemes if remote sensing of forest carbon is to achieve its promise.
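The "dilution" bias has the same structure as classical errors-in-variables attenuation, which a few lines can demonstrate: when the small field plot sees only a noisy sample of the biomass in the larger pixel, the fitted calibration slope shrinks toward zero. All numbers below are illustrative except the 46.3% subplot variability quoted from the abstract.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
agbd_pixel = rng.gamma(shape=8.0, scale=30.0, size=n)     # true pixel AGBD
signal = 0.01 * agbd_pixel + rng.normal(0.0, 0.2, n)      # remote sensing

# A 0.1-ha plot sees pixel biomass plus ~46.3% local sampling variability.
local_cv = 0.463
agbd_plot = np.maximum(
    agbd_pixel * (1.0 + local_cv * rng.standard_normal(n)), 0.0)

slope_true = np.polyfit(agbd_pixel, signal, 1)[0]
slope_diluted = np.polyfit(agbd_plot, signal, 1)[0]
print(slope_true, slope_diluted)   # regression on plot values is biased low
```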
Abstract:
The estimation of water and solute transit times in catchments is crucial for predicting the response of hydrosystems to external forcings (climatic or anthropogenic). The hydrogeochemical signatures of tracers (either natural or anthropogenic) in streams have been widely used to estimate transit times in catchments, as they integrate the various processes at play. However, most of these tracers are well suited only to catchments with mean transit times below about 4-5 years. Since the second half of the 20th century, the intensification of agriculture has led to a general increase in the nitrogen load of rivers. As nitrate is mainly transported by groundwater in agricultural catchments, this signal can be used to estimate transit times greater than several years, even though nitrate is not a conservative tracer. Conceptual hydrological models can be used to estimate catchment transit times provided their consistency is demonstrated, based on their ability to simulate the stream chemical signatures at various time scales as well as catchment internal processes such as N storage in groundwater. The objective of this study was to assess whether a conceptual lumped model is able to simulate the observed patterns of nitrogen concentration at various time scales, from seasonal to pluriannual, and thus whether it is suitable for estimating nitrogen transit times in headwater catchments. A conceptual lumped model, representing shallow groundwater flow as two parallel linear stores with double porosity and riparian processes by a constant nitrogen removal function, was applied to two paired agricultural catchments belonging to the Research Observatory ORE AgrHys. The Generalized Likelihood Uncertainty Estimation (GLUE) approach was used to estimate parameter values and uncertainties. The model performance was assessed on (i) its ability to simulate the contrasted patterns of stream flow and stream nitrate concentrations at seasonal and inter-annual time scales, (ii) its ability to simulate the patterns observed in groundwater at the same temporal scales, and (iii) the consistency of long-term simulations using the calibrated model with the general pattern of increasing nitrate concentration in the region since the beginning of the intensification of agriculture in the 1960s. The simulated nitrate transit times were found to be more sensitive to climate variability than to parameter uncertainty, and average values were consistent with results from other studies in the same region involving modeling and groundwater dating. This study shows that a simple model can be used to simulate the main dynamics of nitrogen in an intensively polluted catchment and then to estimate the transit times of these pollutants in the system, which is crucial for guiding the design and assessment of mitigation plans. (C) 2015 Elsevier B.V. All rights reserved.
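A compact sketch of the model structure and the GLUE step, assuming daily time steps and synthetic forcing; the real model's double porosity, nitrogen transport, and riparian removal are omitted, and all parameter ranges and the behavioral threshold are illustrative.

```python
import numpy as np

def two_store_model(rain, k_fast, k_slow, split):
    """Two parallel linear reservoirs; each store drains as Q = S / k."""
    s_fast = s_slow = 0.0
    q = np.empty_like(rain)
    for t, r in enumerate(rain):
        s_fast += split * r
        s_slow += (1.0 - split) * r
        q_fast, q_slow = s_fast / k_fast, s_slow / k_slow
        s_fast -= q_fast
        s_slow -= q_slow
        q[t] = q_fast + q_slow
    return q

# GLUE sketch: sample parameters, keep "behavioral" sets above a threshold.
rng = np.random.default_rng(6)
rain = rng.exponential(2.0, size=365)
obs = two_store_model(rain, 5.0, 60.0, 0.4) + rng.normal(0.0, 0.05, 365)
behavioral = []
for _ in range(2000):
    p = (rng.uniform(2, 20), rng.uniform(20, 200), rng.uniform(0.1, 0.9))
    sim = two_store_model(rain, *p)
    nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    if nse > 0.7:                       # behavioral threshold
        behavioral.append(p)
print(len(behavioral), "behavioral parameter sets retained")
```

The spread of predictions across the retained sets is then read as the parameter uncertainty band.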
Abstract:
Quantifying the distributional behavior of extreme events is crucial in hydrologic design. Intensity-Duration-Frequency (IDF) relationships are used extensively in engineering, especially in urban hydrology, to obtain the return level of an extreme rainfall event for a specified return period and duration. Major sources of uncertainty in IDF relationships are insufficient quantity and quality of data, leading to parameter uncertainty in the distribution fitted to the data, and uncertainty resulting from the use of multiple GCMs. It is important to quantify these uncertainties and propagate them forward for accurate assessment of future return levels. The objective of this study is to quantify the uncertainties arising from the parameters of the distribution fitted to data and from the use of multiple GCMs in a Bayesian framework. The posterior distribution of the parameters is obtained from Bayes' rule, and the parameters are transformed to obtain return levels for a specified return period. The Markov Chain Monte Carlo (MCMC) method with the Metropolis-Hastings algorithm is used to obtain the posterior distribution of the parameters. Twenty-six CMIP5 GCMs, along with four RCP scenarios, are considered for studying the effects of climate change and for obtaining projected IDF relationships for the case study of Bangalore city in India. GCM uncertainty due to the use of multiple GCMs is treated using the Reliability Ensemble Averaging (REA) technique along with the parameter uncertainty. Scale invariance theory is employed to obtain short-duration return levels from daily data. It is observed that the uncertainty in short-duration rainfall return levels is high compared to that for longer durations. Further, parameter uncertainty is observed to be large compared to model uncertainty. (C) 2015 Elsevier Ltd. All rights reserved.
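A minimal sketch of the Bayesian step, assuming a Gumbel fit with flat priors as a stand-in for the paper's fitted distribution: random-walk Metropolis-Hastings draws the posterior of the parameters, and each draw is transformed into a return level, giving its credible interval.

```python
import numpy as np

def log_post(params, data):
    """Log-posterior for Gumbel(mu, sigma) with flat priors (sigma > 0)."""
    mu, sigma = params
    if sigma <= 0.0:
        return -np.inf
    z = (data - mu) / sigma
    return np.sum(-np.log(sigma) - z - np.exp(-z))

rng = np.random.default_rng(7)
data = rng.gumbel(loc=60.0, scale=15.0, size=80)   # annual max rainfall, mm

x = np.array([50.0, 10.0])                         # initial (mu, sigma)
lp = log_post(x, data)
chain = []
for _ in range(20000):                             # Metropolis-Hastings
    prop = x + rng.normal(0.0, [1.0, 0.5])         # random-walk proposal
    lp_prop = log_post(prop, data)
    if np.log(rng.random()) < lp_prop - lp:        # accept/reject
        x, lp = prop, lp_prop
    chain.append(x.copy())
chain = np.array(chain[5000:])                     # discard burn-in

# Posterior of the 100-year return level, transformed draw by draw.
T = 100
rl = chain[:, 0] - chain[:, 1] * np.log(-np.log(1.0 - 1.0 / T))
print(np.percentile(rl, [2.5, 50, 97.5]))          # 95% credible interval
```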
Abstract:
Most cities in India have undergone rapid development in recent decades, and many rural localities are being transformed into urban hotspots. These developments are accompanied by land use/land cover (LULC) change that affects the runoff response of catchments, often evident as increases in runoff peaks, volume, and velocity in the drain network. Many existing storm water drains are in a dilapidated state owing to improper maintenance or inadequate design. The drains are conventionally designed using procedures based on some anticipated future conditions, and the values of parameters/variables associated with the design of the network are traditionally considered deterministic. In reality, however, these parameters/variables carry uncertainty due to natural and/or inherent randomness, and this uncertainty should be considered when designing a storm water drain network that can effectively convey the discharge. The present study evaluates the performance of an existing storm water drain network in Bangalore, India, through reliability analysis by the Advanced First Order Second Moment (AFOSM) method. In the reliability analysis, the roughness coefficient, slope, and conduit dimensions are treated as random variables. Performance of the existing network is evaluated for three failure modes. The first failure mode occurs when runoff exceeds the capacity of the storm water drain network; the second occurs when the actual flow velocity in the network exceeds the maximum allowable velocity for erosion control; and the third occurs when the minimum flow velocity is less than the minimum allowable velocity for deposition control. In the analysis, runoff generated from subcatchments of the study area and flow velocity in storm water drains are estimated using the Storm Water Management Model (SWMM). The reliability values are found to be low under all three failure modes, indicating a need to redesign several of the conduits to improve their reliability. This study finds use in devising plans for expansion of the Bangalore storm water drain system. (C) 2015 The Authors. Published by Elsevier B.V.
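For intuition, here is a mean-value FOSM sketch of the first failure mode (runoff demand exceeding conduit capacity); the advanced (AFOSM) method additionally iterates to the design point, and all statistics below are illustrative rather than taken from the Bangalore network.

```python
import numpy as np
from math import erf, sqrt

def fosm_reliability(mu_cap, cv_cap, mu_dem, cv_dem):
    """Mean-value FOSM for the margin g = capacity - demand with
    independent random capacity and demand."""
    g_mean = mu_cap - mu_dem                       # mean safety margin
    g_std = np.hypot(mu_cap * cv_cap, mu_dem * cv_dem)
    beta = g_mean / g_std                          # reliability index
    p_fail = 0.5 * (1.0 - erf(beta / sqrt(2.0)))   # Phi(-beta)
    return beta, p_fail

# Conduit capacity 12 m3/s (COV 15%) vs. peak runoff 10 m3/s (COV 25%).
beta, pf = fosm_reliability(12.0, 0.15, 10.0, 0.25)
print(beta, pf)
```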
Abstract:
We explore the prospects for observing CP violation in the minimal supersymmetric extension of the Standard Model (MSSM) with six CP-violating parameters, three gaugino mass phases and three phases in trilinear soft supersymmetry-breaking parameters, using the CPsuperH code combined with a geometric approach to maximise CP-violating observables subject to the experimental upper bounds on electric dipole moments. We also implement CP-conserving constraints from Higgs physics, flavour physics and the upper limits on the cosmological dark matter density and spin-independent scattering. We study possible values of observables within the constrained MSSM (CMSSM), the non-universal Higgs model (NUHM), the CPX scenario and a variant of the phenomenological MSSM (pMSSM). We find values of the CP-violating asymmetry A_CP in b → sγ decay that may be as large as 3%, so future measurements of A_CP may provide independent information about CP violation in the MSSM. We find that CP-violating MSSM contributions to the B_s meson mass mixing term ΔM_Bs are in general below the present upper limit, which is dominated by theoretical uncertainties. If these could be reduced, ΔM_Bs could also provide an interesting and complementary constraint on the six CP-violating MSSM phases, enabling them all to be determined experimentally, in principle. We also find that CP violation in the h_{2,3}τ⁺τ⁻ and h_{2,3}t̄t couplings can be quite large, and so may offer interesting prospects for future pp, e⁺e⁻, μ⁺μ⁻ and γγ colliders.
Abstract:
Climate change is most likely to introduce an additional stress to already stressed water systems in developing countries. Climate change is inherently linked with the hydrological cycle and is expected to cause significant alterations in regional water resources systems, necessitating measures for adaptation and mitigation. Increasing temperatures, for example, are likely to change precipitation patterns, resulting in alterations of regional water availability, the evapotranspirative water demand of crops and vegetation, extremes of floods and droughts, and water quality. A comprehensive assessment of the regional hydrological impacts of climate change is thus necessary. Global climate model simulations provide future projections of the climate system, taking into consideration changes in external forcings such as atmospheric carbon dioxide and aerosols, especially those resulting from anthropogenic emissions. However, such simulations are typically run at a coarse scale and are not equipped to reproduce regional hydrological processes. This paper summarizes recent research on the assessment of climate change impacts on regional hydrology, addressing the issues of mismatch in scale and in the physical processes represented. Particular attention is given to changes in water availability, irrigation demands, and water quality. The paper also describes methodologies developed to address uncertainties in the projections, resulting both from incomplete knowledge about the future evolution of human-induced emissions and from the use of multiple climate models. Approaches for investigating possible causes of historically observed changes in regional hydrological variables are also discussed. Illustrations of all the above-mentioned methods are provided for Indian regions, with a view to specifically aiding water management in India.
Abstract:
Bioenergy deployment offers significant potential for climate change mitigation, but also carries considerable risks. In this review, we bring together the perspectives of the various communities involved in the research and regulation of bioenergy deployment in the context of climate change mitigation: land-use and energy experts, land-use and integrated assessment modelers, human geographers, ecosystem researchers, climate scientists, and two different strands of life-cycle assessment experts. We summarize technological options, outline the state-of-the-art knowledge on various climate effects, provide an update on estimates of technical resource potential, and comprehensively identify sustainability effects. Cellulosic feedstocks, increased end-use efficiency, improved land carbon-stock management and residue use, and, when fully developed, BECCS appear to be the most promising options, depending on development costs, implementation, learning, and risk management. Combined heat and power, efficient biomass cookstoves, and small-scale power generation for rural areas can help to promote energy access and sustainable development, along with reduced emissions. We estimate the sustainable technical potential as up to 100 EJ (high agreement), 100-300 EJ (medium agreement), and above 300 EJ (low agreement). Stabilization scenarios indicate that bioenergy may contribute 10 to 245 EJ yr^-1 to global primary energy supply by 2050. Models indicate that, if technological and governance preconditions are met, large-scale deployment (>200 EJ), together with BECCS, could help to keep global warming below 2 °C relative to preindustrial levels; but such high deployment of land-intensive bioenergy feedstocks could also lead to detrimental climate effects and negatively impact ecosystems, biodiversity, and livelihoods. The integration of bioenergy systems into agriculture and forest landscapes can improve land and water use efficiency and help address concerns about environmental impacts. We conclude that the high variability in pathways, uncertainties in technological development, and ambiguity in political decisions render forecasts of deployment levels and climate effects very difficult. However, uncertainty about projections should not preclude pursuing beneficial bioenergy options.
Abstract:
Minimization problems with respect to a one-parameter family of generalized relative entropies are studied. These relative entropies, which we term relative α-entropies (denoted I_α), arise as redundancies under mismatched compression when cumulants of compressed lengths are considered instead of expected compressed lengths. These parametric relative entropies are a generalization of the usual relative entropy (Kullback-Leibler divergence). Just like relative entropy, the relative α-entropies behave like squared Euclidean distance and satisfy the Pythagorean property. Minimizers of these relative α-entropies on closed and convex sets are shown to exist. Such minimizations generalize the maximum Rényi or Tsallis entropy principle. The minimizing probability distribution (termed the forward I_α-projection) for a linear family is shown to obey a power law. Other results in connection with statistical inference, namely subspace transitivity and iterated projections, are also established. In a companion paper, a related minimization problem of interest in robust statistics that leads to a reverse I_α-projection is studied.
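For reference, the α = 1 member of the family is the ordinary relative entropy, whose forward projection and Pythagorean property (the special case of what the abstract generalizes to I_α) read as follows; this is standard material (Csiszár's I-projection), not taken from the paper itself.

```latex
\[
  D(P \| Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)},
  \qquad
  P^{*} = \arg\min_{P \in E} D(P \| Q),
\]
% For E closed and convex, the forward projection P* satisfies the
% Pythagorean inequality, the "squared Euclidean distance" behavior:
\[
  D(P \| Q) \;\ge\; D(P \| P^{*}) + D(P^{*} \| Q)
  \quad \text{for all } P \in E .
\]
```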
Abstract:
Groundwater management involves conflicting objectives, as maximization of discharge contradicts the criteria of minimum pumping cost and minimum piping cost. In addition, the available data contain uncertainties such as market fluctuations, variations in the water levels of wells, and variations in groundwater policies. A fuzzy model is evolved to tackle these uncertainties, and a multiobjective optimization is conducted to simultaneously satisfy the contradicting objectives. To obtain the upper and lower bounds of the individual objectives, particle swarm optimization (PSO) is adopted, and the analytic element method (AEM) is employed to obtain the operating potentiometric head. In this study, a multiobjective fuzzy optimization model considering three conflicting objectives is developed using the PSO and AEM methods to obtain a sustainable groundwater management policy. The developed model is applied to a case study, and it is demonstrated that the compromise solution satisfies all the objectives with adequate levels of satisfaction. Sensitivity analysis carried out by varying the parameters shows that the effect of any such variation is quite significant. Copyright (c) 2015 John Wiley & Sons, Ltd.
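A toy illustration of the maximin fuzzy formulation driven by a bare-bones PSO, combining the two ingredients the abstract names; the three membership functions, their bounds, and the single decision variable are invented for illustration, and the AEM groundwater simulation is not represented.

```python
import numpy as np

def membership(value, worst, best):
    """Linear fuzzy membership: 0 at the worst bound, 1 at the best."""
    return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

def satisfaction(q):
    """Overall satisfaction = min of the three memberships (maximin),
    with toy stand-ins for discharge, pumping cost and piping cost."""
    discharge = membership(q, worst=0.0, best=100.0)
    pumping = membership(-0.02 * q ** 2, worst=-200.0, best=0.0)
    piping = membership(-1.5 * q, worst=-150.0, best=0.0)
    return min(discharge, pumping, piping)

# Bare-bones particle swarm maximizing the min-membership criterion.
rng = np.random.default_rng(8)
x = rng.uniform(0.0, 100.0, 30)                    # candidate pumping rates
v = np.zeros_like(x)
pbest = x.copy()
pval = np.array([satisfaction(xi) for xi in x])
gbest = pbest[pval.argmax()]
for _ in range(200):
    r1, r2 = rng.random(x.size), rng.random(x.size)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 100.0)
    val = np.array([satisfaction(xi) for xi in x])
    improved = val > pval
    pbest[improved], pval[improved] = x[improved], val[improved]
    gbest = pbest[pval.argmax()]
print(gbest, satisfaction(gbest))                  # compromise solution
```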