Resumo:
Gabion-faced retaining walls are essentially semi-rigid structures that can generally accommodate large lateral and vertical movements without excessive structural distress. Because of this inherent feature, they offer technical and economic advantages over conventional concrete gravity retaining walls. Although they can be constructed either as gravity type or reinforced soil type, this work mainly deals with gabion-faced reinforced earth walls as they are more suitable for larger heights. The main focus of the present investigation was the development of a viable plane-strain two-dimensional non-linear finite element analysis code which can predict the stress-strain behaviour of gabion-faced retaining walls, both gravity type and reinforced soil type. The gabion facing, backfill soil, in-situ soil and foundation soil were modelled using 2D four-noded isoparametric quadrilateral elements. The confinement provided by the gabion boxes was converted into an induced apparent cohesion as per the membrane correction theory proposed by Henkel and Gilbert (1952). The mesh reinforcement was modelled using 2D two-noded linear truss elements. The interactions between the soil and the mesh reinforcement, as well as between the facing and the backfill, were modelled using 2D four-noded zero-thickness line interface elements (Desai et al., 1974) by incorporating the non-linear hyperbolic formulation for the tangential shear stiffness. The well-known hyperbolic formulation of Duncan and Chang (1970) was used for modelling the non-linearity of the soil matrix. The failure of the soil matrix, the gabion facing and the interfaces was modelled using the Mohr-Coulomb failure criterion. The construction stages were also modelled. Experimental investigations were conducted on small-scale model walls (both in the field and in the laboratory) to suggest an alternative fill material for gabion-faced retaining walls.
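The Duncan and Chang (1970) hyperbolic model referred to above ties the tangent modulus of the soil to the current stress state. A minimal sketch of its standard textbook form follows; the parameter symbols (K, n, Rf) are the model's usual constants, and the numerical values in the usage line are illustrative, not taken from the thesis.

```python
import math

def duncan_chang_tangent_modulus(sigma1, sigma3, c, phi_deg, K, n, Rf, pa=101.325):
    """Tangent modulus Et of the Duncan-Chang (1970) hyperbolic model
    for principal stresses sigma1 >= sigma3 (stress units of pa):

        Et = [1 - Rf*(1 - sin(phi))*(s1 - s3) /
                  (2*c*cos(phi) + 2*s3*sin(phi))]**2 * K * pa * (s3/pa)**n
    """
    phi = math.radians(phi_deg)
    # Mobilised deviator stress relative to its Mohr-Coulomb limit
    dev_limit = 2.0 * c * math.cos(phi) + 2.0 * sigma3 * math.sin(phi)
    SL = Rf * (1.0 - math.sin(phi)) * (sigma1 - sigma3) / dev_limit
    Ei = K * pa * (sigma3 / pa) ** n  # initial modulus (Janbu form)
    return (1.0 - SL) ** 2 * Ei

# Illustrative values (kPa); at low stress level Et stays close to Ei:
Et = duncan_chang_tangent_modulus(sigma1=60.0, sigma3=50.0,
                                  c=5.0, phi_deg=35.0,
                                  K=350.0, n=0.5, Rf=0.9)
```

The squared bracket makes the modulus degrade smoothly towards zero as the deviator stress approaches the Mohr-Coulomb limit, which is what makes the formulation attractive for incremental construction-stage analysis.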
The same walls were also used to validate the finite element programme developed as part of the study. The studies were conducted using different types of gabion fill materials. The variation was achieved by placing coarse aggregate and quarry dust in different proportions, either as layers one above the other or mixed together in the required proportions. The deformation of the wall face was measured and the behaviour of the walls with varying fill materials was analysed. It was seen that 25% of the fill material in the gabions can be replaced by a soft material (any locally available material) without greatly affecting the deformation behaviour. In circumstances where some deformation can be tolerated, even up to 50% replacement with soft material is possible. The developed finite element code was validated using the experimental test results and other published results. Encouraged by the close comparison between theory and experiment, an extensive and systematic parametric study was conducted in order to gain a closer understanding of the behaviour of the system. Geometric as well as material parameters were varied to understand their effect on the behaviour of the walls. The final phase of the study consisted of developing a simplified method for the design of gabion-faced retaining walls. The design was based on the limit state method, considering both stability and deformation criteria. The design parameters were selected for the system and converted to dimensionless parameters. Thus the procedure for fixing the dimensions of the wall was simplified by eliminating the conventional trial-and-error procedure. Handy design charts were developed which should prove a hands-on tool for design engineers at site. Economic studies were also conducted to demonstrate the cost effectiveness of these structures with respect to conventional RCC gravity walls, and cost prediction models and cost breakdown ratios were proposed.
The studies as a whole are expected to contribute substantially to the understanding of the actual behaviour of gabion-faced retaining wall systems, with particular reference to lateral deformations.
Resumo:
Motivation for the speaker recognition work is presented in the first part of the thesis, along with an exhaustive survey of past work in this field. A low-cost system not involving complex computation was chosen for implementation. Towards this, a PC-based system was designed and developed. A front-end analog-to-digital converter (12 bit) was built and interfaced to a PC. Software to control the ADC and to perform various analytical functions, including feature vector evaluation, was developed. It is shown that a fixed set of phrases incorporating evenly balanced phonemes is aptly suited for the speaker recognition work at hand, and a set of phrases was chosen for recognition. Two new methods were adopted for feature evaluation. Some new measurements, involving a symmetry check method for pitch period detection and ACE, are used as features. Arguments are provided to show the need for a new model of speech production. Starting from heuristics, a knowledge-based (KB) speech production model is presented. In this model, a KB provides impulses to a voice-producing mechanism and constant correction is applied via a feedback path; it is this correction that differs from speaker to speaker. Methods of defining measurable parameters for use as features are described. Algorithms for speaker recognition are developed and implemented. Two methods are presented. The first is based on the model postulated: here the entropy of the utterance of a phoneme is evaluated, and the transitions of voiced regions are used as speaker-dependent features. The second method uses features found in other works, but evaluated differently. A knock-out scheme is used to provide the weightage values for the selection of features. Results of implementation are presented which show an average of 80% recognition. It is also shown that if there are long gaps between sessions, the performance deteriorates, and this deterioration is speaker dependent.
Cross-recognition percentages are also presented; in the worst case this rises to 30%, while in the best case it is 0%. Suggestions for further work are given in the concluding chapter.
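For context on the pitch-period feature mentioned above, the sketch below shows the standard autocorrelation technique for pitch detection, not the thesis's symmetry-check method, applied to an illustrative synthetic voiced-like signal.

```python
import numpy as np

def pitch_period_autocorr(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the pitch period (seconds) of a voiced frame by finding
    the lag of the autocorrelation peak within the plausible pitch range.
    A common baseline technique, shown only for context."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # lag search window
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs

# Synthetic 100 Hz "voiced" frame sampled at 8 kHz:
fs = 8000
t = np.arange(0, 0.04, 1.0 / fs)
frame = np.sin(2 * np.pi * 100 * t) + 0.3 * np.sin(2 * np.pi * 200 * t)
period = pitch_period_autocorr(frame, fs)  # close to 0.01 s (100 Hz)
```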
Resumo:
Object recognition is complicated by clutter, occlusion, and sensor error. Since pose hypotheses are based on image feature locations, these effects can lead to false negatives and positives. In a typical recognition algorithm, pose hypotheses are tested against the image, and a score is assigned to each hypothesis. We use a statistical model to determine the score distribution associated with correct and incorrect pose hypotheses, and use binary hypothesis testing techniques to distinguish between them. Using this approach we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
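The threshold-selection idea can be sketched as follows, assuming (purely for illustration) Gaussian score distributions for correct and incorrect pose hypotheses; the paper's actual statistical score model may differ.

```python
import math

def gauss_cdf(x, mu, sigma):
    """Cumulative distribution of a normal variable via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def error_probability(t, mu_bad, sd_bad, mu_good, sd_good, p_good=0.5):
    """Total probability of a mistake when hypotheses scoring above t are
    accepted: false accepts of incorrect poses plus false rejects of correct ones."""
    p_false_accept = 1.0 - gauss_cdf(t, mu_bad, sd_bad)
    p_false_reject = gauss_cdf(t, mu_good, sd_good)
    return (1.0 - p_good) * p_false_accept + p_good * p_false_reject

def best_threshold(mu_bad, sd_bad, mu_good, sd_good, p_good=0.5):
    """Grid-search the internal score threshold minimising the error probability."""
    grid = [mu_bad + i * (mu_good - mu_bad) / 1000.0 for i in range(1001)]
    return min(grid, key=lambda t: error_probability(t, mu_bad, sd_bad,
                                                     mu_good, sd_good, p_good))

# Equal variances and priors: the optimum sits midway between the two means.
t = best_threshold(mu_bad=10.0, sd_bad=2.0, mu_good=20.0, sd_good=2.0)
```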
Resumo:
In the accounting literature, interaction or moderating effects are usually assessed by means of OLS regression, and summated rating scales are constructed to reduce measurement error bias. Structural equation models and two-stage least squares regression could be used to eliminate this bias completely, but large samples are needed. Partial Least Squares regression is appropriate for small samples but does not correct measurement error bias. In this article, disattenuated regression is discussed as a small-sample alternative and is illustrated on the data of Bisbe and Otley (in press), who examine the interaction effect of innovation and style of use of budgets on performance. Sizeable differences emerge between OLS and disattenuated regression.
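Disattenuated regression rests on the classical correction of a correlation for unreliability; a minimal sketch with illustrative numbers (not the Bisbe and Otley data):

```python
import math

def disattenuate(r_xy, rel_x, rel_y):
    """Correct an observed correlation for measurement error using the
    classical disattenuation formula r_true = r_xy / sqrt(rel_x * rel_y),
    where rel_* are the scales' reliabilities (e.g. Cronbach's alpha)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# An observed correlation of 0.42 between scales with reliabilities 0.80
# and 0.70 disattenuates to 0.42 / sqrt(0.56), roughly 0.561.
r = disattenuate(0.42, 0.80, 0.70)
```

The correction always increases the magnitude of the estimate, which is why sizeable differences from OLS can emerge when the scales are noisy.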
Resumo:
Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. From the various specifications, that of Jöreskog and Yang (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
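Moderated regression, the baseline these structural-equation specifications extend, estimates an interaction by adding a product term to the regressors. A small sketch on simulated data; the variable names and coefficients are illustrative, and unlike the paper's approach this sketch does not correct for measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)  # e.g. style of use of budgets (illustrative)
z = rng.normal(size=n)  # e.g. innovation (illustrative)
# True model with an interaction effect of 0.4 plus small noise:
y = 1.0 + 0.5 * x + 0.3 * z + 0.4 * x * z + rng.normal(scale=0.1, size=n)

# Moderated regression: include the product x*z as its own regressor.
X = np.column_stack([np.ones(n), x, z, x * z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta recovers roughly [1.0, 0.5, 0.3, 0.4]; beta[3] is the interaction.
```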
Resumo:
The relevance of the fragment relaxation energy term and the effect of the basis set superposition error on the geometry of the BF3⋯NH3 and C2H4⋯SO2 van der Waals dimers have been analyzed. Second-order Møller-Plesset perturbation theory calculations with the D95(d,p) basis set have been used to calculate the counterpoise-corrected barrier height for the internal rotations. These barriers have been obtained by relocating the stationary points on the counterpoise-corrected potential energy surface of the processes involved. The fragment relaxation energy can have a large influence on both the intermolecular parameters and the barrier height. The counterpoise correction has proved to be important for these systems.
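The counterpoise correction referred to here evaluates each monomer in the full dimer basis (with ghost functions on the partner) so that the basis set superposition error cancels. A minimal sketch of the Boys-Bernardi energy bookkeeping, with illustrative energies that are not taken from the article:

```python
def cp_interaction_energy(e_ab_dimer, e_a_in_dimer_basis, e_b_in_dimer_basis):
    """Counterpoise-corrected interaction energy (Boys-Bernardi scheme):

        E_int^CP = E_AB(dimer basis) - E_A(dimer basis) - E_B(dimer basis)

    All three energies must come from the same level of theory, with the
    monomers computed in the full dimer basis via ghost functions."""
    return e_ab_dimer - e_a_in_dimer_basis - e_b_in_dimer_basis

# Illustrative energies in hartree (hypothetical, for arithmetic only):
e_int = cp_interaction_energy(-156.4321, -76.2100, -80.2150)
```

Relocating stationary points on the CP-corrected surface, as in the article, means applying this correction at every geometry visited by the optimizer rather than once at the uncorrected minimum.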
Resumo:
The effect of basis set superposition error (BSSE) on molecular complexes is analyzed. The BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared to uncorrected densities based on the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities on the molecular plane are also plotted to gain insight into the effects of BSSE correction on the electron density.
Resumo:
Geometries, vibrational frequencies, and interaction energies of the CNH⋯O3 and HCCH⋯O3 complexes are calculated on a counterpoise-corrected (CP-corrected) potential-energy surface (PES) that corrects for the basis set superposition error (BSSE). Ab initio calculations are performed at the Hartree-Fock (HF) and second-order Møller-Plesset (MP2) levels, using the 6-31G(d,p) and D95++(d,p) basis sets. Interaction energies are presented including corrections for zero-point vibrational energy (ZPVE) and the thermal correction to enthalpy at 298 K. The CP-corrected and conventional PES are compared; the uncorrected PES obtained using the larger basis set including diffuse functions exhibits a double-well shape, whereas use of the 6-31G(d,p) basis set leads to a flat single-well profile. The CP-corrected PES always has a multiple-well shape. In particular, it is shown that the CP-corrected PES using the smaller basis set is qualitatively analogous to that obtained with the larger basis set, so the CP method becomes useful for correctly describing large systems, where the use of small basis sets may be necessary.
Resumo:
We describe a simple method to automate the geometric optimization of molecular orbital calculations of supermolecules on potential surfaces that are corrected for basis set superposition error using the counterpoise (CP) method. This method is applied to the H-bonding complexes HF/HCN, HF/H2O, and HCCH/H2O using the 6-31G(d,p) and D95++(d,p) basis sets at both the Hartree-Fock and second-order Møller-Plesset levels. We report the interaction energies, geometries, and vibrational frequencies of these complexes on the CP-optimized surfaces, and compare them with similar values calculated using traditional methods, including the (more traditional) single-point CP correction. Upon optimization on the CP-corrected surface, the interaction energies become more negative (before vibrational corrections) and the H-bonding stretching vibrations decrease in all cases. The extent of the effects varies from extremely small to quite large depending on the complex and the calculational method. The relative magnitudes of the vibrational corrections cannot be predicted from the H-bond stretching frequencies alone.
Resumo:
Comparison of donor-acceptor electronic couplings calculated within two-state and three-state models suggests that the two-state treatment can provide unreliable estimates of Vda because it neglects multistate effects. We show that in most cases accurate values of the electronic coupling in a π stack, where donor and acceptor are separated by a bridging unit, can be obtained as

Ṽda = (E2 − E1)μ12/Rda + (2E3 − E1 − E2)·2μ13μ23/Rda²,

where E1, E2, and E3 are the adiabatic energies of the ground, charge-transfer, and bridge states, respectively, μij is the transition dipole moment between states i and j, and Rda is the distance between the planes of the donor and acceptor. In this expression, based on the generalized Mulliken-Hush approach, the first term corresponds to the coupling derived within a two-state model, whereas the second term is the superexchange correction accounting for the bridge effect. The formula is extended to bridges consisting of several subunits. The influence of the donor-acceptor energy mismatch on the excess charge distribution, adiabatic dipole and transition moments, and electronic couplings is examined. A diagnostic is developed to determine whether the two-state approach can be applied. Based on numerical results, we show that the superexchange correction considerably improves estimates of the donor-acceptor coupling derived within a two-state approach. In most cases where the two-state scheme fails, the formula gives reliable results which are in good agreement (within 5%) with the data of the three-state generalized Mulliken-Hush model.
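A direct transcription of the coupling expression quoted in this abstract, as reconstructed here from the flattened layout (the placement of the divisions by Rda and Rda² is an assumption based on that reading):

```python
def coupling_superexchange(E1, E2, E3, mu12, mu13, mu23, Rda):
    """Donor-acceptor coupling with a superexchange (bridge) correction,
    generalized Mulliken-Hush form as reconstructed from the abstract:

        V = (E2 - E1)*mu12/Rda + (2*E3 - E1 - E2)*2*mu13*mu23/Rda**2

    The first term is the two-state GMH coupling; the second is the
    bridge correction. All quantities in consistent (e.g. atomic) units."""
    two_state = (E2 - E1) * mu12 / Rda
    bridge = (2.0 * E3 - E1 - E2) * 2.0 * mu13 * mu23 / Rda ** 2
    return two_state + bridge

# Illustrative (hypothetical) inputs in atomic units:
v = coupling_superexchange(E1=0.0, E2=0.1, E3=0.3,
                           mu12=1.0, mu13=0.5, mu23=0.5, Rda=7.0)
```

Note how the bridge term vanishes when the transition moments to the bridge state (mu13, mu23) are zero, recovering the plain two-state estimate.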
Resumo:
A primary interest of this thesis is to obtain a powerful tool for determining the structural, electrical and reactivity properties of molecules. A second interest is the study of the basis set superposition error in hydrogen-bonded complexes. One way to correct this error is the counterpoise (CP) correction proposed by Boys and Bernardi. Usually the counterpoise correction is applied only to geometries that have already been optimized. Our goal was to obtain potential energy surfaces in which every point is CP-corrected. These surfaces have minima that differ from those of the uncorrected surface, i.e., the geometric parameters are different. The curvature at these minima is also different, and therefore the vibrational frequencies also change when they are corrected for BSSE. Once these surfaces were constructed, various complexes were studied. The influence of the calculation method on the basis set superposition error was also investigated.
Resumo:
A new formulation of a pose refinement technique using ``active'' models is described. An error term derived from the detection of image derivatives close to an initial object hypothesis is linearised and solved by least squares. The method is particularly well suited to problems involving external geometrical constraints (such as the ground-plane constraint). We show that the method is able to recover both the pose of a rigid model, and the structure of a deformable model. We report an initial assessment of the performance and cost of pose and structure recovery using the active model in comparison with our previously reported ``passive'' model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence.
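The linearise-and-solve step described above is ordinary Gauss-Newton least squares; a generic sketch of one update, not the paper's specific image-derivative error term:

```python
import numpy as np

def gauss_newton_step(residual, jacobian, params):
    """One linearised least-squares update, the core of an 'active model'
    refinement loop: linearise the error term around the current pose and
    solve for the parameter increment that minimises the squared residual."""
    r = residual(params)
    J = jacobian(params)
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return params + delta

# Toy one-parameter example: choose x so that [x, 2x] matches [3, 6].
residual = lambda p: np.array([p[0] - 3.0, 2.0 * p[0] - 6.0])
jacobian = lambda p: np.array([[1.0], [2.0]])
p = gauss_newton_step(residual, jacobian, np.array([0.0]))
```

In a pose-refinement setting, `params` would hold pose (and structure) parameters, constrained variants simply restrict the columns of the Jacobian, and the step is iterated until the residual stops decreasing.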
Resumo:
The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
Resumo:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1·0 km × 0·4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
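Friction in such 2D hydraulic models is typically parameterised through Manning's roughness coefficient; a standalone illustration of why uncertainty in that single parameter matters (this is generic hydraulics, not code from any of the six models tested):

```python
def manning_velocity(n, R, S):
    """Mean flow velocity (m/s) from Manning's equation in SI units:

        v = (1/n) * R**(2/3) * sqrt(S)

    with roughness coefficient n, hydraulic radius R (m) and energy
    slope S (dimensionless)."""
    return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

# Doubling n halves the predicted velocity for the same depth and slope,
# so friction uncertainty translates directly into flow-speed uncertainty:
v1 = manning_velocity(n=0.03, R=0.5, S=0.001)  # e.g. smooth urban surface
v2 = manning_velocity(n=0.06, R=0.5, S=0.001)  # e.g. rougher surface
```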