164 results for Probabilistic methods


Relevance:

20.00%

Publisher:

Abstract:

The delineation of seismic source zones plays an important role in the evaluation of seismic hazard. In most studies, seismic source delineation is based on geological features; in the present study, an attempt has been made to delineate seismic source zones in the study area (south India) based on seismicity parameters. The seismicity parameters and the maximum probable earthquake for these source zones were evaluated and used in the hazard evaluation. The probabilistic evaluation of seismic hazard for south India was carried out using a logic tree approach. Two different types of seismic sources, linear and areal, were considered to model the seismic sources in the region more precisely. To account properly for the attenuation characteristics of the region, three different attenuation relations were used with different weighting factors. Seismic hazard evaluation was done for probabilities of exceedance (PE) of 10% and 2% in 50 years. The spatial variation of rock-level peak horizontal acceleration (PHA) and spectral acceleration (Sa) values corresponding to return periods of 475 and 2500 years for the entire study area are presented in this work. The peak ground acceleration (PGA) values at ground surface level were estimated for different NEHRP site classes by considering local site effects.
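The correspondence between the two PE levels and the quoted return periods follows from the standard Poisson occurrence model; a minimal sketch of that textbook relation (not code from the study itself):

```python
import math

def return_period(pe, t_years):
    """Return period T implied by a probability of exceedance PE over
    t years under a Poisson model: PE = 1 - exp(-t / T), hence
    T = -t / ln(1 - PE)."""
    return -t_years / math.log(1.0 - pe)

# 10% and 2% PE in 50 years give the familiar hazard-map return periods
print(round(return_period(0.10, 50)))  # -> 475
print(round(return_period(0.02, 50)))  # -> 2475, commonly rounded to 2500
```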

Relevance:

20.00%

Publisher:

Abstract:

Fast curve-fitting procedures are proposed for vertical and radial consolidation under rapid loading methods. In vertical consolidation, the next load increment can be applied at 50-60% consolidation (or even earlier if the compression index is known). In radial consolidation, the next load increment can be applied at just 10-15% consolidation. The effects of secondary consolidation on the coefficient of consolidation and the ultimate settlement are minimized in both cases. A quick procedure is proposed for vertical consolidation that determines how far the calculated coefficient of consolidation c_v is from the true c_v. In radial consolidation no such procedure is required because, at 10-15% consolidation, the effects of secondary consolidation are already small in most inorganic soils. The proposed rapid loading methods can be used when the settlement or the time of a load increment is not known. The characteristic features of vertical, radial, three-dimensional, and secondary consolidation are given in terms of the rate of settlement. A relationship is proposed between the coefficient of vertical consolidation, the load increment ratio, and the compression index. (C) 2013 American Society of Civil Engineers.
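For reference, the classical Terzaghi relation between the average degree of vertical consolidation U and the time factor Tv, which underlies such curve-fitting procedures (standard textbook approximations, not the paper's proposed method):

```python
import math

def time_factor_vertical(U):
    """Terzaghi time factor Tv for an average degree of consolidation
    U (0 < U < 1): Tv = (pi/4) U^2 for U <= 0.60, and
    Tv = 1.781 - 0.933 log10(100 (1 - U)) beyond that."""
    if U <= 0.60:
        return (math.pi / 4.0) * U ** 2
    return 1.781 - 0.933 * math.log10(100.0 * (1.0 - U))

print(time_factor_vertical(0.50))  # ~0.197, the familiar Tv at 50% consolidation
```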

Relevance:

20.00%

Publisher:

Abstract:

Estimating a program's worst-case execution time (WCET) accurately and efficiently is a challenging task. Several programs exhibit phase behavior, wherein cycles per instruction (CPI) varies in phases during execution. Recent work has suggested the use of phases in such programs to estimate WCET with minimal instrumentation; however, the suggested model uses a function of mean CPI that carries no probabilistic guarantees. We propose to use Chebyshev's inequality, which can be applied to any arbitrary distribution of CPI samples, to probabilistically bound the CPI of a phase. Applying Chebyshev's inequality to phases that exhibit high CPI variation leads to pessimistic upper bounds. We propose a mechanism that refines such phases into sub-phases based on program counter (PC) signatures collected using profiling, and that also allows the user to control the variance of CPI within a sub-phase. We describe a WCET analyzer built on these lines and evaluate it with standard WCET and embedded benchmark suites on two different architectures for three chosen probabilities, p = {0.9, 0.95, 0.99}. For p = 0.99, refinement based on PC signatures alone reduces the average pessimism of the WCET estimate by 36% (77%) on Arch1 (Arch2). Compared to Chronos, an open-source static WCET analyzer, the average improvement in estimates obtained by refinement is 5% (125%) on Arch1 (Arch2). On limiting the variance of CPI within a sub-phase to {50%, 10%, 5%, 1%} of its original value, the average accuracy of the WCET estimate improves further to {9%, 11%, 12%, 13%}, respectively, on Arch1. On Arch2, the average accuracy improves to 159% when the CPI variance is limited to 50% of its original value, and the improvement is marginal beyond that point.
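The distribution-free bounding step can be sketched directly from Chebyshev's inequality; this is an illustration of the inequality itself, not the analyzer described in the abstract:

```python
import math
import statistics

def chebyshev_cpi_upper_bound(cpi_samples, p):
    """Upper-bound the CPI of a phase with probability at least p using
    Chebyshev's inequality, which holds for any distribution:
    P(|X - mu| >= k*sigma) <= 1/k^2.  Choosing k = 1/sqrt(1 - p)
    gives P(X < mu + k*sigma) >= p."""
    mu = statistics.mean(cpi_samples)
    sigma = statistics.pstdev(cpi_samples)   # population std-dev of the samples
    k = 1.0 / math.sqrt(1.0 - p)
    return mu + k * sigma
```

High sample variance inflates sigma and hence the bound, which is exactly why the abstract refines high-variance phases into lower-variance sub-phases.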

Relevance:

20.00%

Publisher:

Abstract:

The uncertainty in material properties and traffic characterization in the design of flexible pavements has led to significant efforts in recent years to incorporate reliability methods and probabilistic design procedures for the design, rehabilitation, and maintenance of pavements. In the mechanistic-empirical (ME) design of pavements, despite the fact that there are multiple failure modes, the design criteria applied in the majority of analytical pavement design methods guard only against fatigue cracking and subgrade rutting, which are usually treated as independent failure events. This study carries out the reliability analysis of a flexible pavement section for these failure criteria using the first-order reliability method (FORM), the second-order reliability method (SORM), and crude Monte Carlo simulation. Through a sensitivity analysis, the surface layer thickness was identified as the most critical parameter affecting the design reliability for both the fatigue and rutting failure criteria. However, reliability analysis in pavement design is most useful if it can be efficiently and accurately applied to the components of pavement design and to the combination of these components in an overall system analysis. The study shows that for the pavement section considered there is a high degree of dependence between the two failure modes, and demonstrates that the probability of simultaneous occurrence of failures can be almost as high as the probability of the component failures. Thus, the need to consider system reliability in pavement analysis is highlighted, and the study indicates that improving pavement performance should be tackled by reducing this undesirable event of simultaneous failure, not merely by addressing the more critical failure mode. Furthermore, this probability of simultaneous occurrence of failures is seen to increase considerably with small increments in the mean traffic loads, which also results in wider system reliability bounds. The study also advocates the use of narrow bounds on the probability of failure, which provide a better estimate of the probability of failure, as validated against the results obtained from Monte Carlo simulation (MCS).
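The effect of dependent failure modes on the system probability can be illustrated with a toy two-mode Monte Carlo sketch; the limit states below are hypothetical stand-ins sharing a common load term, not the study's pavement model:

```python
import random

def two_mode_mcs(n=200_000, seed=42):
    """Crude MCS for two limit states g1, g2 that share a common load
    term, making the failure events strongly dependent.  Returns the
    component probabilities p1, p2 and the joint probability p12."""
    rng = random.Random(seed)
    n1 = n2 = n12 = 0
    for _ in range(n):
        load = rng.gauss(0.0, 1.0)                     # shared traffic-load term
        g1 = 3.0 - load + 0.5 * rng.gauss(0.0, 1.0)    # fatigue-like mode
        g2 = 3.2 - load + 0.5 * rng.gauss(0.0, 1.0)    # rutting-like mode
        f1, f2 = g1 < 0.0, g2 < 0.0
        n1 += f1
        n2 += f2
        n12 += f1 and f2
    return n1 / n, n2 / n, n12 / n

p1, p2, p12 = two_mode_mcs()
# series-system probability p1 + p2 - p12 sits inside the simple bounds
# max(p1, p2) <= p_sys <= p1 + p2, and p12 approaches the component values
```

With a dominant shared load, p12 comes out close to min(p1, p2), mirroring the abstract's observation that simultaneous failure can be almost as likely as a component failure.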

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a novel approach to solve the ordinal regression problem using Gaussian processes. The proposed approach, probabilistic least squares ordinal regression (PLSOR), obtains the probability distribution over ordinal labels using a particular likelihood function. It performs model selection (hyperparameter optimization) using the leave-one-out cross-validation (LOO-CV) technique. PLSOR has the conceptual simplicity and ease of implementation of the least squares approach. Unlike the existing Gaussian process ordinal regression (GPOR) approaches, PLSOR does not use any approximation techniques for inference. We compare the proposed approach with the state-of-the-art GPOR approaches on synthetic and benchmark data sets. Experimental results show the competitiveness of the proposed approach.

Relevance:

20.00%

Publisher:

Abstract:

A review of high operating temperature (HOT) infrared (IR) photon detector technology, covering material requirements, device design and the state of the art achieved, is presented in this article. The HOT photon detector concept offers the promise of operation at temperatures from above 120 K up to near room temperature. The advantages are reductions in system size, weight and cost, and an increase in system reliability. A theoretical study is presented of the thermal generation-recombination (g-r) processes, such as Auger and defect-related Shockley-Read-Hall (SRH) recombination, that are responsible for increasing dark current in HgCdTe detectors. The results of the theoretical analysis are used to evaluate the performance of long-wavelength (LW) and mid-wavelength (MW) IR detectors at high operating temperatures. (C) 2013 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

In this article, we derive an a posteriori error estimator for the various discontinuous Galerkin (DG) methods proposed in (Wang, Han and Cheng, SIAM J. Numer. Anal., 48: 708-733, 2010) for an elliptic obstacle problem. Using a key property of DG methods, we perform the analysis in a general framework. The error estimator obtained for the DG methods is comparable with the estimator for the conforming Galerkin (CG) finite element method. A key tool in the analysis is a non-linear smoothing function, which we construct, mapping the DG finite element space to the CG finite element space. The error estimator involves a discrete Lagrange multiplier associated with the obstacle constraint, and it is shown for non-over-penalized DG methods that this discrete Lagrange multiplier is uniformly stable on non-uniform meshes. Finally, numerical results demonstrating the performance of the error estimator are presented.

Relevance:

20.00%

Publisher:

Abstract:

Energy research is to a large extent materials research, encompassing the physics and chemistry of materials, including their synthesis, their processing into components and their design into architectures that allow them to function as energy devices, extending to their operating parameters and environment, and including also their degradation, limited life, ultimate failure and potential recycling. In all these stages, X-ray and electron spectroscopy are helpful methods of analysis, characterization and diagnostics for the engineer and for the researcher working in basic science. This paper gives a short overview of experiments with X-ray and electron spectroscopy on solar energy and water-splitting materials, and also addresses the issue of solar fuel, a relatively new topic in energy research. The featured systems are iron oxide and tungsten oxide as photoanodes, and hydrogenases as molecular systems. We present surface and subsurface studies with ambient-pressure XPS and hard X-ray XPS, resonant photoemission, light-induced effects in resonant photoemission experiments, a photo-electrochemical in situ/operando NEXAFS experiment in a liquid cell, and nuclear resonant vibrational spectroscopy (NRVS). (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Structural dynamics of dendritic spines is one of the key correlative measures of synaptic plasticity for encoding short-term and long-term memory. Optical studies of structural changes in brain tissue using confocal microscopy face difficulties with scattering, which results in a low signal-to-noise ratio and thus limits the imaging depth to a few tens of microns. Multiphoton microscopy (MpM) overcomes this limitation by using low-energy photons to cause localized excitation and achieve high resolution in all three dimensions. Multiple low-energy photons with longer wavelengths minimize scattering and allow access to deeper brain regions at several hundred microns. In this article, we provide a basic understanding of the physical phenomena that give MpM an edge over conventional microscopy. Further, we highlight a few of the key studies in the field of learning and memory that would not have been possible without the advent of MpM.

Relevance:

20.00%

Publisher:

Abstract:

Sparse recovery methods utilize l(p)-norm-based regularization in the estimation problem, with 0 <= p <= 1. These methods are particularly useful when the number of independent measurements is limited, which is typical of the diffuse optical tomographic image reconstruction problem. These sparse recovery methods, along with an approximation to the l(0)-norm, were deployed for the reconstruction of diffuse optical images. Their performance was compared systematically using both numerical and gelatin phantom cases to show that these methods hold promise in improving reconstructed image quality.
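The p = 1 case of such regularization is commonly solved by iterative soft-thresholding (ISTA); a minimal numpy sketch of that generic scheme, with made-up dimensions, not the reconstruction code used in the study:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for the l1-regularized least-squares
    problem  min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

With an identity forward operator, the solution reduces to soft-thresholding the data, which makes the sparsifying effect of the penalty easy to see.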

Relevance:

20.00%

Publisher:

Abstract:

This study presents the response of a vertically loaded pile in undrained clay considering spatially distributed undrained shear strength. The probabilistic study treats undrained shear strength as a random variable, and the analysis is conducted using random field theory. Inherent soil variability is considered the source of variability, and the field is modeled as a two-dimensional non-Gaussian homogeneous random field. The random field is simulated using the Cholesky decomposition technique within a finite difference program, and a Monte Carlo simulation approach is used for the probabilistic analysis. The influence of the variance and spatial correlation of undrained shear strength on the ultimate capacity, taken as the sum of the ultimate skin friction and the end bearing resistance of the pile, is examined. It is observed that the coefficient of variation and the spatial correlation distance are the most important parameters affecting the pile's ultimate capacity.
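One Cholesky-based realization of a non-Gaussian (here lognormal) strength field can be sketched in a few lines; the 1-D profile, exponential correlation and parameter values below are illustrative assumptions, not the study's model:

```python
import numpy as np

def simulate_lognormal_field(depths, mean, cov, corr_len, seed=0):
    """One realization of a 1-D lognormal random field: build the
    correlation matrix, Cholesky-factor it to correlate standard
    normals, then map to a lognormal with the target mean and COV."""
    rng = np.random.default_rng(seed)
    d = np.abs(depths[:, None] - depths[None, :])
    R = np.exp(-2.0 * d / corr_len)                 # exponential autocorrelation
    L = np.linalg.cholesky(R + 1e-10 * np.eye(len(depths)))
    g = L @ rng.standard_normal(len(depths))        # correlated N(0, 1) values
    s_ln = np.sqrt(np.log(1.0 + cov ** 2))          # lognormal parameters that
    m_ln = np.log(mean) - 0.5 * s_ln ** 2           # preserve the mean and COV
    return np.exp(m_ln + s_ln * g)
```

Repeating this over many seeds and feeding each realization to the pile model is the Monte Carlo loop the abstract describes.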

Relevance:

20.00%

Publisher:

Abstract:

Structural Support Vector Machines (SSVMs) and Conditional Random Fields (CRFs) are popular discriminative methods used for classifying structured and complex objects like parse trees, image segments and part-of-speech tags. The datasets involved are of very high dimensionality, and the models designed using typical training algorithms for SSVMs and CRFs are non-sparse. This non-sparseness results in slow inference, so there is a need for new algorithms for sparse SSVM and CRF classifier design. The use of the elastic net and the L1-regularizer has already been explored for solving the primal CRF and SSVM problems, respectively, to design sparse classifiers. In this work, we focus on the dual elastic-net-regularized SSVM and CRF. By exploiting the weakly coupled structure of these convex programming problems, we propose a new sequential alternating proximal (SAP) algorithm to solve these dual problems. The algorithm works by sequentially visiting each training set example and solving a simple subproblem restricted to the small subset of variables associated with that example. Numerical experiments on various benchmark sequence labeling datasets demonstrate that the proposed algorithm scales well. Further, the classifiers designed are sparser than those designed by solving the respective primal problems, with comparable generalization performance. Thus, the proposed SAP algorithm is a useful alternative for sparse SSVM and CRF classifier design.
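The per-example subproblems in such alternating proximal schemes typically reduce to closed-form proximal operators; for the elastic-net penalty the coordinate-wise prox is a soft-threshold followed by a shrinkage. A generic sketch of that operator, not the paper's exact subproblem:

```python
import math

def prox_elastic_net(v, step, l1, l2):
    """Proximal operator of h(x) = l1*|x| + (l2/2)*x^2, applied
    coordinate-wise: prox_{step*h}(v) = soft(v, step*l1) / (1 + step*l2)."""
    out = []
    for vi in v:
        s = math.copysign(max(abs(vi) - step * l1, 0.0), vi)  # soft-threshold
        out.append(s / (1.0 + step * l2))                     # quadratic shrink
    return out
```

The soft-threshold zeroes out small coordinates, which is precisely where the sparsity of the resulting classifier comes from.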

Relevance:

20.00%

Publisher:

Abstract:

The electronic structure of Nd1-xYxMnO3 (x = 0-0.5) is studied using x-ray absorption near-edge structure (XANES) spectroscopy at the Mn K-edge, along with DFT-based LSDA+U and real-space cluster calculations. The main edge of the spectra does not show any variation with doping. The pre-edge shows two distinct features which become well separated with doping, and the intensity of the pre-edge decreases with doping. The theoretical XANES spectra were calculated using real-space multiple scattering methods, which reproduce the entire experimental spectrum at the main edge as well as the pre-edge. Density functional theory calculations are used to obtain the Mn 4p, Mn 3d and O 2p densities of states. For x = 0, the site-projected density of states at 1.7 eV above the Fermi energy shows a singular peak of unoccupied e(g) (spin-up) states, which is hybridized with Mn 4p and O 2p states. For x = 0.5, this feature develops at a higher energy, is highly delocalized, and overlaps with the 3d spin-down states, which changes the pre-edge intensity. The Mn 4p DOS for both compositions shows considerable differences between the individual p(x), p(y) and p(z) states. For x = 0.5, there is a considerable change in the 4p orbital polarization, suggesting changes in the Jahn-Teller effect with doping. (C) 2013 Elsevier Ltd. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

Objective identification and description of mimicked calls is a primary component of any study of avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of ninety-five per cent only when we combined the results of both methods. This study is the first to use Mel-frequency cepstral coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.
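Comparing variable-length sequences of spectral features (such as MFCC frames from two calls) typically relies on an alignment-based distance like dynamic time warping; a self-contained numpy sketch of that generic distance, not the study's implementation:

```python
import numpy as np

def dtw_distance(X, Y):
    """Dynamic-time-warping distance between two feature sequences
    (arrays of shape frames x coefficients), allowing calls of
    different durations to be compared frame-by-frame."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(X[i - 1] - Y[j - 1])   # frame-to-frame cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

A mimicked call would then be matched to the putative model call with the smallest DTW distance over its feature sequence.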

Relevance:

20.00%

Publisher:

Abstract:

A number of ecosystems can exhibit abrupt shifts between alternative stable states. Because of their important ecological and economic consequences, recent research has focused on devising early warning signals for anticipating such abrupt ecological transitions. In particular, theoretical studies show that changes in the spatial characteristics of a system could provide early warnings of approaching transitions. However, the empirical validation of these indicators lags behind their theoretical development. Here, we summarize a range of currently available spatial early warning signals, suggest potential null models to interpret their trends, and apply them to three simulated spatial data sets of systems undergoing an abrupt transition. In addition to providing a step-by-step methodology for applying these signals to spatial data sets, we propose a statistical toolbox that may be used to help detect approaching transitions in a wide range of spatial data. We hope that our methodology, together with the computer codes, will stimulate the application and testing of spatial early warning signals on real spatial data.
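Two of the most widely used spatial indicators, spatial variance and a lag-1 spatial autocorrelation (Moran's-I style), can be sketched for a gridded state variable; this is a generic illustration under simple right/down neighbour weights, not the statistical toolbox the authors provide:

```python
import numpy as np

def spatial_indicators(grid):
    """Spatial variance and a lag-1 spatial autocorrelation (Moran's-I
    style, counting each right/down neighbour pair once) for a 2-D
    array of state values.  Rising values of both indicators are the
    classic warning of an approaching transition."""
    z = grid - grid.mean()
    variance = z.var()
    num = (z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum()
    den = (z ** 2).sum()
    n_pairs = z[:, :-1].size + z[:-1, :].size
    moran = (z.size / n_pairs) * num / den
    return variance, moran
```

Tracking these two numbers over successive snapshots of a spatial data set is the basic workflow behind such early warning signals.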