155 results for BDH assumption


Relevance:

10.00%

Abstract:

The component- and system-reliability-based design of bridge abutments under earthquake loading is presented in this paper. A planar failure surface is used in conjunction with a pseudo-dynamic approach to compute seismic active earth pressures on an abutment. The pseudo-dynamic method considers the effect of phase difference in shear waves and soil amplification, along with horizontal seismic accelerations, strain localization in the backfill soil, and the associated post-peak reduction in shear resistance from peak to residual values along a previously formed failure plane. Four modes of stability, viz. sliding, overturning, eccentricity, and bearing capacity of the foundation soil, are considered in the analysis. The series system reliability is computed under the assumption of independent failure modes. Lower and upper bounds on the system reliability are also computed by taking into account the correlations between the four failure modes, which are evaluated using the direction cosines of the tangent planes at the most probable points of failure.
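
To make the series-system idea concrete, the sketch below computes the system failure probability under the independence assumption, together with the simple first-order bounds that bracket it for any correlation structure. The mode failure probabilities are illustrative placeholders; the paper's correlation-based (direction-cosine) bounds are not reproduced here.

```python
import numpy as np

# Hypothetical failure probabilities for the four modes
# (sliding, overturning, eccentricity, bearing capacity).
p = np.array([1e-3, 5e-4, 2e-3, 1e-3])

# Series system: the abutment fails if ANY mode fails.
# Under the independence assumption:
p_sys_indep = 1.0 - np.prod(1.0 - p)

# First-order bounds, valid for any correlation between modes:
lower = p.max()             # modes perfectly correlated
upper = min(p.sum(), 1.0)   # modes mutually exclusive
print(f"independent: {p_sys_indep:.3e}  bounds: [{lower:.3e}, {upper:.3e}]")
```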

Relevance:

10.00%

Abstract:

Scenic word images undergo degradations due to motion blur, uneven illumination, shadows, and defocusing, which make segmentation difficult. As a result, the recognition results reported on the scenic word image datasets of ICDAR have been low. We introduce a novel technique in which we choose the middle row of the image as a sub-image and segment it first. The labels from this segmented sub-image are then propagated to the other pixels in the image. This approach, which is unique and distinct from existing methods, results in improved segmentation. Bayesian classification and max-flow methods have been used independently for label propagation. This midline-based approach limits the impact of degradations present in the image. The segmented text image is recognized using the trial version of Omnipage OCR. We have tested our method on the ICDAR 2003 and ICDAR 2011 datasets. Our word recognition results of 64.5% and 71.6% are better than those of methods in the literature, including methods that competed in the Robust Reading Competition. Our method makes the implicit assumption that the middle row is free of degradation.
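
As a crude illustration of the midline idea, the sketch below thresholds the middle row with Otsu's method and then propagates its two labels to the whole image by nearest class mean. This is a deliberately simplified stand-in for the Bayesian and max-flow propagation used in the paper; the function names and the dark-text-on-light-background assumption are ours.

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's threshold for a 1-D array of 8-bit gray levels."""
    hist = np.bincount(values.astype(np.uint8).ravel(), minlength=256)
    hist = hist / hist.sum()
    w = hist.cumsum()                              # class-0 probability
    mu = (hist * np.arange(256)).cumsum()          # class-0 mean mass
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w - mu) ** 2 / (w * (1.0 - w))
    return int(np.nanargmax(sigma_b))              # maximize between-class variance

def midline_segment(img):
    """Label the middle row first, then assign every pixel in the image
    to the nearer of the two class means learned from that row."""
    mid = img[img.shape[0] // 2].astype(float)
    t = otsu_threshold(mid)
    text_mean = mid[mid <= t].mean()               # assumes dark text
    bg_mean = mid[mid > t].mean()
    return np.abs(img - text_mean) < np.abs(img - bg_mean)   # True = text pixel
```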

Relevance:

10.00%

Abstract:

With no Channel State Information (CSI) at the users, transmission over the two-user Gaussian Multiple Access Channel with fading and finite constellations at the input suffers high error rates due to multiple access interference (MAI). However, perfect CSI at the users is an unrealistic assumption in the wireless scenario, as it would involve extremely large feedback overheads. In this paper we propose a scheme which removes the adverse effect of MAI using only quantized knowledge of the fade state at the transmitters, so that the associated overhead is nominal. One of the users rotates its constellation relative to the other, without varying the transmit power, to adapt to the existing channel conditions and meet a predetermined minimum Euclidean distance requirement in the equivalent constellation at the destination. The optimal rotation scheme is described for the case when both users use symmetric M-PSK constellations at the input, where M = 2^λ with λ a positive integer. The strategy is illustrated by the example where both users use QPSK signal sets at the input. The case when the users use PSK constellations of different sizes is also considered. It is shown that the proposed scheme has considerably better error performance than the conventional non-adaptive scheme, at the cost of a feedback overhead of just ⌈log₂(M²/8 − M/4 + 2)⌉ + 1 bits for the M-PSK case.
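
The feedback overhead is easy to tabulate. The snippet below evaluates the reconstructed expression ⌈log₂(M²/8 − M/4 + 2)⌉ + 1; reading the parenthesized count as the number of quantized fade-state regions is our interpretation of the abstract, not a statement from the paper.

```python
import math

def feedback_bits(M):
    """Feedback overhead in bits for symmetric M-PSK users, per the
    expression in the abstract: ceil(log2(M^2/8 - M/4 + 2)) + 1."""
    regions = M * M // 8 - M // 4 + 2   # assumed: number of quantized fade states
    return math.ceil(math.log2(regions)) + 1

for M in (4, 8, 16):                    # QPSK needs only 3 feedback bits
    print(f"M = {M:2d}: {feedback_bits(M)} bits")
```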

Relevance:

10.00%

Abstract:

For one-dimensional flexible objects such as ropes, chains, and hair, the assumption of constant length is realistic for large-scale 3D motion. Moreover, when the motion or disturbance at one end gradually dies down along the curve defining the one-dimensional flexible object, the motion appears "natural". This paper presents a purely geometric and kinematic approach for deriving natural, length-preserving transformations of planar and spatial curves. Techniques from variational calculus are used to determine analytical conditions, and it is shown that the velocity at any point on the curve must be along the tangent at that point to preserve length and to yield the feature of diminishing motion. It is shown that for the special case of a straight line, the analytical conditions lead to the classical tractrix curve solution. Since analytical solutions exist for a tractrix curve, the motion of a piecewise linear curve can be solved in closed form and can thus be applied to the resolution of redundancy in hyper-redundant robots. Simulation results for several planar and spatial curves and various input motions of one end are used to illustrate the features of motion damping and eventual alignment with the perturbation vector.
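
The qualitative behavior is easy to reproduce numerically. The sketch below is a discrete "follow-the-leader" approximation of length-preserving motion, a crude numerical stand-in for the paper's closed-form tractrix solution (not the analytical result itself):

```python
import numpy as np

def drag_chain(points, head_target, steps=200):
    """Move the head toward its target in small increments and let each
    subsequent point follow along the line to its predecessor, keeping
    every link length fixed."""
    pts = np.asarray(points, dtype=float)
    head_start = pts[0].copy()
    target = np.asarray(head_target, dtype=float)
    lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    for step in np.linspace(0.0, 1.0, steps)[1:]:
        pts[0] = head_start + step * (target - head_start)
        for i in range(1, len(pts)):
            d = pts[i] - pts[i - 1]
            # each point moves along the line to its predecessor (tangent),
            # preserving the link length exactly
            pts[i] = pts[i - 1] + d * (lengths[i - 1] / np.linalg.norm(d))
    return pts

chain = [(float(i), 0.0) for i in range(5)]   # straight chain of four unit links
print(drag_chain(chain, head_target=(0.0, 2.0)))
```

Dragging the head of an initially straight chain sideways moves the first link strongly while the last barely moves, matching the diminishing-motion feature described above.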

Relevance:

10.00%

Abstract:

Wind stress is the most important ocean forcing driving tropical surface currents. Stress can be estimated from scatterometer-reported wind measurements at 10 m that have been extrapolated to the surface, assuming a neutrally stable atmosphere and no surface current. Scatterometer calibration is designed to account for the assumption of neutral stability; however, the assumption of a particular sea state and negligible current often introduces an error into wind stress estimates. Since the fundamental scatterometer measurement is of the surface radar backscatter (sigma-0), which is related to surface roughness and thus to stress, we develop a method to estimate wind stress directly from the scatterometer measurements of sigma-0 and their associated azimuth and incidence angles using a neural network approach. We compare the results with in situ estimates and observe that the wind stress estimates from this approach are more accurate than those obtained from conventional estimates using 10-m-height wind measurements.
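
A minimal sketch of such a regression is shown below, on synthetic stand-in data (the real inputs would be collocated sigma-0, azimuth, and incidence measurements paired with in situ stress estimates; the network size and the toy stress relationship are our assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 5000
sigma0 = rng.uniform(-30.0, -5.0, n)          # backscatter (dB), illustrative range
azimuth = rng.uniform(0.0, 2 * np.pi, n)      # antenna azimuth (rad)
incidence = rng.uniform(0.3, 1.0, n)          # incidence angle (rad)
# synthetic "true" stress with a mild azimuthal modulation (toy relationship)
stress = 0.02 * 10 ** (sigma0 / 20.0) * (1 + 0.1 * np.cos(2 * azimuth)) / incidence

# encode azimuth periodically so the network sees a continuous angle
X = np.column_stack([sigma0, np.cos(azimuth), np.sin(azimuth), incidence])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, stress)
print("R^2 on training data:", model.score(X, stress))
```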

Relevance:

10.00%

Abstract:

Latent variable methods such as PLCA (Probabilistic Latent Component Analysis) have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. The hard constraint facilitates the solution of ML boundary estimation using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared with state-of-the-art speech segmentation algorithms.
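
A minimal sketch of the dynamic-programming boundary search under the single-component-per-segment constraint is given below. Frame vectors are treated as count/energy distributions; the full PLCS model and the coarticulation extension are not reproduced.

```python
import numpy as np

def segment_loglik(X, i, j):
    """Log-likelihood of frames X[i:j] under the segment's single ML
    spectral distribution (the normalized sum of the frames)."""
    seg = X[i:j]
    p = seg.sum(axis=0)
    p = p / p.sum()
    logp = np.log(np.where(p > 0, p, 1.0))        # log(1) = 0 where p == 0
    return float((seg * logp).sum())

def plcs_boundaries(X, K):
    """ML placement of K segments over the T frames of a non-negative
    spectrogram X (T x F), via dynamic programming."""
    T = len(X)
    dp = np.full((K + 1, T + 1), -np.inf)         # dp[k, t]: best score, k segs, t frames
    back = np.zeros((K + 1, T + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            for s in range(k - 1, t):
                val = dp[k - 1, s] + segment_loglik(X, s, t)
                if val > dp[k, t]:
                    dp[k, t], back[k, t] = val, s
    bounds, t = [], T
    for k in range(K, 0, -1):                     # backtrack the boundaries
        t = back[k, t]
        bounds.append(t)
    return sorted(b for b in bounds if b > 0)     # interior boundaries only

X = np.vstack([np.random.dirichlet([8, 1, 1], 20),   # regime 1
               np.random.dirichlet([1, 1, 8], 20)])  # regime 2
print(plcs_boundaries(X, K=2))                       # expect a boundary near 20
```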

Relevance:

10.00%

Abstract:

Auction-based mechanisms have become popular in industrial procurement settings. These mechanisms minimize the cost of procurement and at the same time achieve desirable properties such as truthful bidding by the suppliers. In this paper, we investigate the design of truthful procurement auctions taking into account an additional important issue, namely carbon emissions. In particular, we focus on the following procurement problem: a buyer wishes to source multiple units of a homogeneous item from several competing suppliers who offer volume discount bids and who also provide emission curves that specify the cost of emissions as a function of the volume of supply. We assume that emission curves are reported truthfully, since that information is easily verifiable through standard sources. First, we formulate the volume discount procurement auction problem with emission constraints under the assumption that the suppliers are honest (that is, they report production costs truthfully). Next, we describe a mechanism design formulation for green procurement with strategic suppliers. Our numerical experiments show that emission constraints can significantly alter sourcing decisions and affect procurement costs dramatically. To the best of our knowledge, this is the first effort to explicitly take carbon emissions into account in planning procurement auctions.
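
As a toy version of the honest-supplier formulation, the sketch below solves a linearized variant with scipy: constant unit costs stand in for volume-discount bids and constant unit emissions stand in for emission curves. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([10.0, 12.0, 9.0])        # unit production cost per supplier
emis = np.array([2.0, 1.0, 3.0])          # unit emission per supplier
cap = np.array([60.0, 80.0, 50.0])        # supplier capacities
demand, emission_cap = 120.0, 200.0

res = linprog(
    c=cost,                                          # minimize procurement cost
    A_ub=emis.reshape(1, -1), b_ub=[emission_cap],   # aggregate emission cap
    A_eq=np.ones((1, 3)), b_eq=[demand],             # meet demand exactly
    bounds=[(0.0, ci) for ci in cap],
)
print("allocation:", res.x, "cost:", res.fun)
```

Tightening `emission_cap` shifts quantity from cheap, high-emission suppliers to cleaner ones, illustrating how emission constraints alter sourcing decisions and raise procurement cost.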

Relevance:

10.00%

Abstract:

By applying the lower bound theorem of limit analysis in conjunction with finite elements and nonlinear optimization, the bearing capacity factor N has been computed for a rough strip footing by incorporating pseudostatic horizontal seismic body forces. Compared with existing approaches, the present analysis is more rigorous, because it requires no assumption about either the failure mechanism or the variation of the ratio of shear to normal stress along the footing-soil interface. The magnitude of N decreases considerably with an increase in the horizontal seismic acceleration coefficient (kh). With an increase in kh, a continuous spread of the plastic zone toward the direction of the horizontal seismic body force is noted. The results obtained in this paper compare well with solutions reported in the literature. (C) 2013 American Society of Civil Engineers.

Relevance:

10.00%

Abstract:

Estimation of design quantiles of hydrometeorological variables at critical locations in river basins is necessary for hydrological applications. To arrive at reliable estimates for locations (sites) where no or limited records are available, various regional frequency analysis (RFA) procedures have been developed over the past five decades. The most widely used procedure is based on the index-flood approach and L-moments. It assumes that the values of the scale and shape parameters of the frequency distribution are identical across all the sites in a homogeneous region. In a real-world scenario, this assumption may not be valid even if a region is statistically homogeneous. To address this issue, a novel mathematical approach is proposed. It involves (i) identification of an appropriate frequency distribution to fit the random variable being analyzed for the homogeneous region, (ii) use of a proposed transformation mechanism to map observations of the variable from the original space to a dimensionless space where the form of the distribution does not change and the variation in the values of its parameters across sites is minimal, (iii) construction of a growth curve in the dimensionless space, and (iv) mapping the curve to the original space for the target site by applying the inverse transformation, to arrive at the required quantile(s) for the site. The effectiveness of the proposed approach (PA) in predicting quantiles for ungauged sites is demonstrated through Monte Carlo simulation experiments considering five frequency distributions widely used in RFA, and by a case study on watersheds in the conterminous United States. Results indicate that the PA outperforms methods based on the index-flood approach.
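
For orientation, the sketch below implements the standard index-flood baseline that the proposed approach is compared against (not the paper's transformation mechanism): scale each gauged site's data by its mean, pool the scaled samples, fit a regional GEV growth curve, and rescale by an index estimate at the target site. The data, index value, and GEV choice are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
sites = [genextreme.rvs(c=-0.1, loc=100 * s, scale=20 * s, size=40, random_state=rng)
         for s in (1.0, 2.5, 4.0)]                  # synthetic homogeneous region

scaled = np.concatenate([x / x.mean() for x in sites])   # dimensionless pooled sample
c, loc, scale = genextreme.fit(scaled)                   # regional growth curve (GEV)

index_ungauged = 250.0     # index flood at the target site (e.g., from regression)
T = 100                    # return period (years)
quantile = index_ungauged * genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
print(f"{T}-year quantile estimate: {quantile:.1f}")
```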

Relevance:

10.00%

Abstract:

An attempt is made to study the fluid dynamic behavior of a two-phase flow comprising a solid and a liquid of nearly equal density, in a geometry of industrial significance in areas such as the processing of polymers, food, pharmaceuticals, and paints. In this work, crystalline silica is considered as the dispersed medium in glycerin. In the CFD analysis carried out, the two phase components are considered to be premixed homogeneously in the initial state. The flow considered is in a cylinder with an axially driven bi-lobe rotor, a typical blender used in the polymer industry for mixing or kneading multi-component mixtures to a homogeneous condition. A viscous, incompressible, isothermal flow is considered, with the assumption that the components do not undergo any physical change and that the solids are rigid and mix under fully wetting conditions. Silica with a particle diameter of 0.4 mm is considered, and the flow is analyzed for different mixing fractions. An industry-standard CFD code is used to solve the 3D RANS equations. As the outcome of the study, the torque demand of the bi-lobe rotor, estimated for different mixture fractions, shows consistency with the physical phenomena expected in the domain considered.

Relevance:

10.00%

Abstract:

There have been attempts at obtaining robust guidance laws to ensure zero miss distance (ZMD) for interceptors with parametric uncertainties. All these laws require the plant to be of minimum phase type to enable the overall guidance loop transfer function to satisfy strict positive realness (SPR). The SPR property implies absolute stability of the closed loop system, and has been shown in the literature to lead to ZMD because it avoids saturation of lateral acceleration. In these works, higher-order interceptors are reduced to lower-order equivalent models for which control laws are designed to ensure ZMD. However, it has also been shown that when the original system with right half plane (RHP) zeros is considered, the resulting miss distances, using such strategies, can be quite high. In this paper, an alternative approach using the circle criterion establishes the conditions for absolute stability of the guidance loop and relaxes the conservative nature of some earlier results arising from the assumption of infinite engagement time. Further, a feedforward scheme in conjunction with a lead-lag compensator is used as one control strategy, while a generalized sampled hold function is used as a second strategy, to shift the RHP transmission zeros and thereby achieve ZMD. It is observed that merely shifting the RHP zero(s) to the left half plane reduces miss distances significantly even when no additional controllers are used to ensure SPR conditions.

Relevance:

10.00%

Abstract:

This paper presents a novel soft-computing-based solution to a complex optimal control (dynamic optimization) problem that requires the solution to be available in real time. The complexities in this problem of optimal guidance of interceptors launched with large initial heading errors include the more involved physics of a three-dimensional missile-target engagement, and those posed by the assumption of a realistic dynamic model with time-varying missile speed, thrust, drag, and mass, besides gravity and an upper bound on the lateral acceleration. The classic pure proportional navigation law is augmented with a polynomial function of the heading error, and the values of the coefficients of the polynomial are determined using differential evolution (DE). The performance of the proposed DE-enhanced guidance law is compared against existing conventional laws in the literature on the criteria of time and energy optimality, peak lateral acceleration demanded, terminal speed, and robustness to unanticipated target maneuvers, to illustrate the superiority of the proposed law. (C) 2013 Elsevier B. V. All rights reserved.
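
The sketch below shows the structure of such a DE-tuned augmented law on a deliberately simplified stand-in: a planar engagement with a stationary target and constant missile speed, in place of the paper's 3-D, time-varying model. The cost weights, bounds, and geometry are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def engagement_cost(coeffs, N=3.0, vm=300.0, dt=0.02, tmax=120.0, amax=100.0):
    """Toy planar engagement; returns flight time plus a control-effort penalty."""
    c1, c2 = coeffs
    target = np.array([10000.0, 0.0])
    pos = np.array([0.0, 0.0])
    gamma = np.deg2rad(60.0)                       # large initial heading error
    energy = 0.0
    for t in np.arange(0.0, tmax, dt):
        r = target - pos
        rng = np.hypot(r[0], r[1])
        if rng < 10.0:
            return t + 1e-4 * energy               # intercept: time + energy cost
        vel = vm * np.array([np.cos(gamma), np.sin(gamma)])
        los = np.arctan2(r[1], r[0])
        los_rate = (r[0] * (-vel[1]) + r[1] * vel[0]) / rng**2
        eps = gamma - los                          # heading error
        # pure proportional navigation, augmented with a polynomial in eps
        a = N * vm * los_rate * (1.0 + c1 * eps + c2 * eps**2)
        a = np.clip(a, -amax, amax)                # lateral-acceleration bound
        energy += a**2 * dt
        gamma += (a / vm) * dt
        pos = pos + vel * dt
    return 1e6                                     # heavy penalty: no intercept

result = differential_evolution(engagement_cost, bounds=[(-2, 2), (-2, 2)],
                                maxiter=15, popsize=8, seed=0)
print("coefficients:", result.x, "cost:", result.fun)
```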

Relevance:

10.00%

Abstract:

An analytical solution describing the transient temperature distribution in a geothermal reservoir in response to the injection of cold water is presented. The reservoir is composed of a confined aquifer sandwiched between rocks of different thermo-geological properties. The heat transport processes considered are advection, longitudinal conduction in the geothermal aquifer, and conductive heat transfer to the underlying and overlying rocks of different geological properties. The one-dimensional heat transfer equation is solved using the Laplace transform under the assumption of constant density and thermal properties of both rock and fluid. Two simpler solutions are then derived, first by neglecting longitudinal conductive heat transport and then by neglecting heat transport to the confining rocks. Results show that heat loss to the confining rock layers plays a vital role in slowing down the cooling of the reservoir. The influence of parameters such as the volumetric injection rate, the longitudinal thermal conductivity, and the porosity of the porous medium on the transient heat transport phenomenon is judged by observing the variation of the transient temperature distribution with different values of these parameters. The effects of the injection rate and the thermal conductivity on the results are found to be profound.
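
For the simplified case that neglects longitudinal conduction, the classical Lauwerier-type result gives the flavor of such solutions. The sketch below evaluates it; the exact scaling of the dimensionless coordinates follows the usual Lauwerier nondimensionalization and is our assumption, not a formula quoted from the abstract.

```python
import numpy as np
from scipy.special import erfc

def lauwerier_temperature(x_d, t_d):
    """Dimensionless cooled-region temperature when heat exchange with
    the confining rocks dominates and longitudinal conduction is
    neglected (classical Lauwerier-type solution)."""
    x_d = np.asarray(x_d, dtype=float)
    t_d = np.asarray(t_d, dtype=float)
    ahead = t_d <= x_d                         # thermal front has not yet arrived
    tau = np.where(ahead, 1.0, t_d - x_d)      # guard against sqrt of negatives
    return np.where(ahead, 0.0, erfc(x_d / (2.0 * np.sqrt(tau))))

x = np.linspace(0.0, 5.0, 6)
print(lauwerier_temperature(x, t_d=4.0))       # cooling decays away from the well
```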

Relevance:

10.00%

Abstract:

The maximum entropy approach to classification is very well studied in applied statistics and machine learning, and almost all methods in the literature are discriminative in nature. In this paper, we introduce a maximum entropy classification method with feature selection for large-dimensional data, such as text datasets, that is generative in nature. To tackle the curse of dimensionality of large datasets, we employ the conditional independence assumption (Naive Bayes), and we perform feature selection simultaneously by enforcing `maximum discrimination' between the estimated class-conditional densities. For two-class problems, the proposed method uses the Jeffreys (J) divergence to discriminate between the class-conditional densities. To extend the method to the multi-class case, we propose a completely new approach based on a multi-distribution divergence: we replace the Jeffreys divergence with the Jensen-Shannon (JS) divergence to discriminate the conditional densities of multiple classes. To reduce computational complexity, we employ a modified Jensen-Shannon divergence (JS(GM)) based on the AM-GM inequality. We show that the resulting divergence is a natural generalization of the Jeffreys divergence to the multiple-distribution case. As for theoretical justification, we show that when one intends to select the best features in a generative maximum entropy approach, maximum discrimination using the J-divergence emerges naturally in binary classification. Performance and comparative studies of the proposed algorithms on large-dimensional text and gene expression datasets show that our methods scale very well to large-dimensional data.
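
To make the binary case concrete, the sketch below scores each feature of a binary data matrix by the Jeffreys divergence between its two class-conditional Bernoulli densities, under the Naive Bayes assumption. This is a minimal J-divergence selection sketch, not the paper's constrained maximum entropy formulation.

```python
import numpy as np

def jeffreys_feature_scores(X, y, alpha=1.0):
    """Per-feature J(p, q) = KL(p||q) + KL(q||p) between the Bernoulli
    class-conditional densities of a binary feature matrix."""
    X, y = np.asarray(X, float), np.asarray(y)
    # Laplace-smoothed P(feature = 1 | class)
    p1 = (X[y == 1].sum(axis=0) + alpha) / (np.sum(y == 1) + 2 * alpha)
    p0 = (X[y == 0].sum(axis=0) + alpha) / (np.sum(y == 0) + 2 * alpha)
    return ((p1 - p0) * (np.log(p1) - np.log(p0))
            + ((1 - p1) - (1 - p0)) * (np.log(1 - p1) - np.log(1 - p0)))

# toy data: only the first 5 of 50 features carry class information
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.random((200, 50)) < (0.2 + 0.5 * y[:, None] * (np.arange(50) < 5))
scores = jeffreys_feature_scores(X, y)
print("top features:", np.argsort(scores)[::-1][:5])   # should favor 0..4
```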

Relevance:

10.00%

Abstract:

The well-known classical nucleation theory (CNT) for the free energy barrier to the formation of a critical-size nucleus of the new stable phase within the parent metastable phase fails to take into account the influence of other metastable phases having density/order intermediate between the parent metastable phase and the final stable phase. This lacuna can be more serious than the capillary approximation or the spherical-shape assumption made in CNT. The issue is particularly significant in ice nucleation, because liquid water shows a rich phase diagram consisting of two (high- and low-density) liquid phases in the supercooled state. Explanations of the thermodynamic and dynamic anomalies of supercooled water often invoke the possible influence of a liquid-liquid transition between the two metastable liquid phases. To investigate the role of both the thermodynamic anomalies and the presence of distinct metastable liquid phases in supercooled water on ice nucleation, we employ a density functional theoretical approach to find the nucleation free energy barrier in different regions of the phase diagram. The theory makes a number of striking predictions, such as a dramatic lowering of the nucleation barrier due to the presence of a metastable intermediate phase, and a crossover in the temperature dependence of the free energy barrier near the liquid-liquid critical point. These predictions can be tested by computer simulations as well as by controlled experiments. (C) 2014 AIP Publishing LLC.
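
For context, the CNT barrier that such density functional calculations refine follows from the textbook balance of surface and bulk terms for a spherical nucleus (standard expressions, not specific to this paper):

```latex
% Free energy of a spherical nucleus of radius r, its critical radius,
% and the barrier height; \gamma is the interfacial tension and
% \Delta G_v the bulk free-energy gain per unit volume.
\Delta G(r) = 4\pi r^{2}\gamma - \tfrac{4}{3}\pi r^{3}\,|\Delta G_v|,
\qquad
r^{*} = \frac{2\gamma}{|\Delta G_v|},
\qquad
\Delta G^{*} = \frac{16\pi\gamma^{3}}{3\,\Delta G_v^{2}}.
```

An intermediate metastable phase that wets the nucleus can lower the effective γ (or alter ΔG_v), which is how it can dramatically reduce ΔG* in the picture described above.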