12 results for "Sequential error ratio" in CaltechTHESIS

Relevance: 20.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration, or source-process time, we used Furumoto's phase method and a linear inversion method in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture that minimize the error in the inversion. This method was applied to the 1977 Sumbawa, Indonesia; 1979 Colombia-Ecuador; 1983 Akita-Oki, Japan; 1985 Valparaiso, Chile; and 1985 Michoacan, Mexico earthquakes, and yielded results consistent with the rupture extent inferred from the aftershock areas of these earthquakes.
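
The one-parameter search described here can be summarized in a short numerical sketch. Everything below is illustrative: the forward operator, data, and grid are synthetic stand-ins, and the boxcar (sinc-spectrum) finiteness correction is an assumption, not the thesis's actual Green's functions.

```python
import numpy as np

# Grid search for the source-process time tau: correct the spectra for a
# finite-duration (boxcar) source, solve the linear inversion, and keep
# the tau that minimizes the least-squares residual.

rng = np.random.default_rng(0)
periods = np.linspace(150.0, 300.0, 40)          # s
omega = 2.0 * np.pi / periods                    # angular frequency, 1/s

G = rng.standard_normal((40, 6))                 # hypothetical forward operator
true_params = rng.standard_normal(6)
tau_true = 80.0                                  # true source-process time, s
data = (G @ true_params) * np.sinc(omega * tau_true / (2.0 * np.pi))

best = (np.inf, None)
for tau in np.arange(10.0, 201.0, 2.0):
    finiteness = np.sinc(omega * tau / (2.0 * np.pi))   # boxcar spectrum sin(x)/x
    Gtau = G * finiteness[:, None]
    params, *_ = np.linalg.lstsq(Gtau, data, rcond=None)
    resid = np.linalg.norm(Gtau @ params - data)
    if resid < best[0]:
        best = (resid, tau)

print(f"best-fit source-process time: {best[1]:.0f} s")
```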

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered the most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (in parentheses), determined by Student's t test, are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km (9, 24); Santa Cruz Is. earthquake (17 July 1980), d = 36 km (18, 46); Samoa earthquake (1 September 1981), d = 15 km (9, 26); Playa Azul, Mexico earthquake (25 October 1981), d = 41 km (28, 49); El Salvador earthquake (19 June 1982), d = 49 km (41, 55); New Ireland earthquake (18 March 1983), d = 75 km (72, 79); Chagos Bank earthquake (30 November 1983), d = 31 km (16, 41); Valparaiso, Chile earthquake (3 March 1985), d = 44 km (15, 54); Michoacan, Mexico earthquake (19 September 1985), d = 24 km (12, 34).
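
The intervals above come out of the inversion itself and are asymmetric. As a reference point only, the textbook symmetric Student's t construction looks like the following sketch; the sample values are made up and are not the thesis's data.

```python
import numpy as np
from scipy import stats

# Two-sided 90% Student's t confidence interval for a mean depth estimate.
depth_samples = np.array([22.0, 25.5, 23.8, 26.1, 24.4, 21.9])  # km, illustrative
n = depth_samples.size
mean = depth_samples.mean()
sem = depth_samples.std(ddof=1) / np.sqrt(n)      # standard error of the mean
tcrit = stats.t.ppf(0.95, df=n - 1)               # 90% two-sided critical value
print(f"d = {mean:.1f} km, 90% CI ({mean - tcrit*sem:.1f}, {mean + tcrit*sem:.1f}) km")
```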

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. For both earthquakes, the depths obtained from overtone Rayleigh waves are consistent with those determined from fundamental Rayleigh waves. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 20.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10⁵ g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration, and of 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets was conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under ~10⁵ g deceleration. Based on the experimental results, including the penetration depth-time histories, the damage to recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. The final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-sectional area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm²) + 16.39 (see the sketch following this list).

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target material under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm²) + 349.0 m/(πR²), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity time histories suggest that whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to very small contact area between the projectile and target materials; (2) A steady penetration stage in which projectile velocity continues to decrease smoothly; (3) A penetration stop stage in which projectile deceleration jumps up when velocities are close to a critical value of ~ 35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 × 10⁴, where v is the initial projectile velocity in m/s. The average pressure acting on the target material during penetration is estimated to be comparable to the shock wave pressure.

7. A similarity of the penetration process is found, described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles of 150 mm length, the penetration depth-time history predicted by this relation for projectiles of 100 mm length is in good agreement with experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates submodels for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on the target material is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905 (log(ε̇/ε̇0))^1.14) over the strain-rate range 10⁻⁷/s to 10³/s, where σ0f and ε̇0 are the reference compressive strength and strain rate, respectively. The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
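
Taking the fitted coefficients in items 2, 4, 6, and 9 at face value, the empirical relations can be evaluated directly. In the sketch below every numerical input is illustrative, and the form of the strain-rate law in item 9 is as reconstructed above.

```python
import math

# Empirical fits from items 2, 4, 6, and 9, coefficients taken at face
# value. Units as stated: es in J/mm^2, m in g, R in mm, v in m/s.

def p_max(es):                        # final penetration depth, mm (item 2)
    return 1.15 * es + 16.39

def t_max(es, m, R):                  # penetration duration, microseconds (item 4)
    return 2.08 * es + 349.0 * m / (math.pi * R**2)

def decel(v):                         # steady-stage deceleration, g (item 6)
    return 192.4 * v + 1.89e4

def strength_ratio(rate, rate0=1e-7): # strain-rate strengthening (item 9, reconstructed)
    return math.exp(0.0905 * math.log10(rate / rate0) ** 1.14)

es = 30.0                             # J/mm^2, illustrative input
print(f"Pmax = {p_max(es):.1f} mm")
print(f"tmax = {t_max(es, m=500.0, R=20.0):.0f} us")
print(f"a = {decel(300.0):.3g} g")
print(f"sigma_f/sigma_0f at 1e3/s: {strength_ratio(1e3):.2f}")
```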

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, in which the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, in which the wave velocity drops below the bulk wave velocity; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), based on the present data and the data of Jackson and Ahrens [1979], for shock pressures between 6 and 40 GPa and initial density ρ0 = 3.655 g/cm³. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge⁴⁺ with O²⁻ in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformations to the rutile structure in the two media are similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as the same structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading.

Experimental results from the spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressures in the phase-transition range (4-15 GPa) decay with propagation distance x as x^(-3.35), close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the variation of compressibility with stress under one-dimensional strain conditions in the two materials.
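
The linear Hugoniot fit above can be converted to shock pressures through the Rankine-Hugoniot momentum jump condition P = ρ0 D u. A minimal sketch using only the numbers quoted in the abstract (the chosen particle velocities are illustrative):

```python
# Shock pressure from the linear Hugoniot fit D = 0.917 + 1.711*u (km/s),
# via the momentum jump condition P = rho0 * D * u.

rho0 = 3655.0                                    # kg/m^3 (3.655 g/cm^3)

def shock_pressure_gpa(u_kms):
    D_kms = 0.917 + 1.711 * u_kms                # shock velocity, km/s
    return rho0 * (D_kms * 1e3) * (u_kms * 1e3) / 1e9

for u in (1.0, 1.5, 2.0):                        # particle velocities, km/s
    print(f"u = {u} km/s -> P = {shock_pressure_gpa(u):.1f} GPa")
```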

Relevance: 20.00%

Abstract:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how the shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.

The first chapter is a game-theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation provides a richer class of reputation building possibilities than when perfect implementation is assumed.

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.

Relevance: 20.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing-biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
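
A toy illustration of why asymmetry pays off under such noise (a generic code-capacity sketch, not one of the thesis's codes): a phase-flip repetition code corrects the dominant dephasing errors while leaving the rare bit flips unprotected.

```python
from math import comb

# n-qubit phase-flip repetition code: corrects up to (n-1)//2 dephasing (Z)
# errors, but bit flips (X) pass through. With pz >> px, the logical rate
# can still fall well below the physical one. Illustrative rates only.

def logical_z(n, pz):
    t = (n - 1) // 2
    return sum(comb(n, k) * pz**k * (1 - pz)**(n - k) for k in range(t + 1, n + 1))

pz, px = 1e-2, 1e-4            # dephasing-biased physical error rates (assumed)
for n in (3, 5, 7):
    p_log = logical_z(n, pz) + n * px   # crude union bound for residual X errors
    print(f"n={n}: logical ~ {p_log:.2e} vs physical {pz + px:.2e}")
```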

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for the faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
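
The convergence to a noise floor can be seen in a toy iteration. Assuming the textbook 15-to-1 suppression p_out ≈ 35 p_in³ (not necessarily one of the protocols analyzed here) plus an additive floor from the fixed Clifford error rate:

```python
# Iterating an idealized 15-to-1 distillation map with faulty Cliffords:
# p_out ~ p_clifford + 35 * p_in^3. The fixed Clifford rate sets a floor
# on the distilled-state error. Illustrative numbers.

p_clifford = 1e-6              # assumed fixed error rate of Clifford gates
p = 1e-2                       # initial magic-state error rate
for round_ in range(1, 6):
    p = p_clifford + 35 * p**3
    print(f"round {round_}: p = {p:.3e}")   # converges to ~p_clifford
```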

Relevance: 20.00%

Abstract:

A search for dielectron decays of heavy neutral resonances has been performed using proton-proton collision data collected at √s = 7 TeV by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in 2011. The data sample corresponds to an integrated luminosity of 5 fb⁻¹. The dielectron mass distribution is consistent with Standard Model (SM) predictions. An upper limit on the ratio of the cross section times branching fraction of new bosons to that of the Z boson is set at the 95% confidence level. This result is translated into lower limits on the mass of new neutral particles: 2120 GeV for the Z′ of the Sequential Standard Model, 1810 GeV for the superstring-inspired Z′ψ resonance, and 1940 (1640) GeV for Kaluza-Klein gravitons with coupling parameter k/MPl of 0.10 (0.05).

Relevance: 20.00%

Abstract:

The motion of a single Brownian particle of arbitrary size through a dilute colloidal dispersion of neutrally buoyant bath spheres of another characteristic size in a Newtonian solvent is examined in two contexts. First, the particle in question, the probe particle, is subject to a constant applied external force drawing it through the suspension, as a simple model for active and nonlinear microrheology. The strength of the applied external force, normalized by the restoring forces of Brownian motion, is the Péclet number, Pe; this dimensionless quantity describes how strongly the probe is upsetting the equilibrium distribution of the bath particles. The mean motion and fluctuations in the probe position are interpreted in terms of an effective viscosity of the suspension. These interpreted quantities are calculated to first order in the volume fraction of bath particles and are intimately tied to the spatial distribution, or microstructure, of bath particles relative to the probe. For weak Pe, the disturbance to the equilibrium microstructure is dipolar in nature, with accumulation and depletion regions on the front and rear faces of the probe, respectively. With increasing applied force, the accumulation region compresses to form a thin boundary layer whose thickness scales with the inverse of Pe, and the depletion region lengthens to form a trailing wake. The magnitude of the microstructural disturbance is found to grow with increasing bath particle size: small bath particles in the solvent resemble a continuum with an effective microviscosity given by Einstein's viscosity correction for a dilute dispersion of spheres, while large bath particles readily advect toward the minimum approach distance between the probe and a bath particle, and rotation of the pair as a doublet is the primary mechanism by which the probe moves past, a process that slows the probe by a factor of the size ratio. The intrinsic microviscosity is found to force-thin at low Péclet number, owing to decreasing contributions from Brownian motion, and to force-thicken at high Péclet number, owing to the increasing influence of the configuration-averaged reduction in the probe's hydrodynamic self-mobility. Nonmonotonicity at finite sizes is evident in the limiting high-Pe intrinsic microviscosity plateau as a function of the bath-to-probe particle size ratio. The intrinsic microviscosity grows with the size ratio for very small probes even at large-but-finite Péclet numbers; however, even a small repulsive interparticle potential that excludes lubrication interactions can reduce it back to an order-one quantity. The results of this active microrheology study are compared to previous theoretical studies of falling-ball and towed-ball rheometry, and of sedimentation and diffusion in polydisperse suspensions, and the singular limit of full hydrodynamic interactions is noted.

Second, the probe particle is no longer subject to a constant applied external force. Rather, the particle is considered to be a catalytically active motor, consuming bath reactant particles on its reactive face while passively colliding with reactant particles on its inert face. By creating an asymmetric distribution of reactant about its surface, the motor is able to propel itself diffusiophoretically with some mean velocity. The effects of the finite size of the solute on the leading-order diffusive microstructure of reactant about the motor are examined. Brownian and interparticle contributions to the motor velocity are computed for several interparticle interaction potential lengths and finite reactant-to-motor particle size ratios, with the dimensionless motor velocity increasing with decreasing motor size. A discussion of Brownian rotation frames the context in which these results could be applicable, and future directions are proposed that properly incorporate reactant advection at high motor velocities.

Relevance: 20.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements, and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed, and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data.

Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented, as well as a reliability interval which reflects the level of error and uncertainty introduced by the recording and digitization process. The data are processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records; it induces little error, and the digitization noise is adequately bounded.

Filtering is intended to be kept to a minimum, and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field, or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data, where noise and error prevail, are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field, and structural data with promising results. Since it maintains the whole frequency range of the record, it should prove very useful in studying the long-period dynamics of local geology and structures.
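
A minimal sketch of the per-harmonic idea behind such noise filters, assuming a classic Wiener-style gain SNR/(1+SNR) and a noise-only segment as the noise estimate; both assumptions are stand-ins, since the thesis's exact filters are not specified here.

```python
import numpy as np

# Wiener-style per-harmonic attenuation: scale each Fourier coefficient by
# SNR/(1+SNR), with the noise spectrum estimated from a noise-only segment.

def noise_filter(record, noise):
    R = np.fft.rfft(record)
    N = np.fft.rfft(noise, n=len(record))
    snr = np.abs(R) ** 2 / np.maximum(np.abs(N) ** 2, 1e-20)  # crude per-bin SNR
    return np.fft.irfft(R * snr / (1.0 + snr), n=len(record))

t = np.arange(0.0, 20.0, 0.01)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 1.5 * t) * np.exp(-0.1 * t)    # synthetic accelerogram
record = signal + 0.2 * rng.standard_normal(t.size)        # with digitization noise
pre_event = 0.2 * rng.standard_normal(t.size)              # noise-only estimate
cleaned = noise_filter(record, pre_event)
print(f"rms error before: {np.std(record - signal):.3f}, after: {np.std(cleaned - signal):.3f}")
```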

Relevance: 20.00%

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In this thesis, we propose efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.

For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part whose magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth and can be resolved on a regular coarse mesh. Furthermore, we provide an error analysis and show that the solution to the effective equation, plus a correction term, is close to the original multiscale solution.

For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest to the largest scale is not too small, we propose the multiscale data-driven stochastic method: we construct a data-driven stochastic basis and solve coupled deterministic PDEs to obtain the solutions. For more difficult problems, we propose the multiscale multilevel Monte Carlo method: we apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of certain stochastic quantities.
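
The KL step can be illustrated with a minimal discrete sketch: eigendecompose a covariance kernel on a 1-D grid and keep the modes carrying most of the variance. The kernel and correlation length below are assumptions for illustration, not the thesis's problems.

```python
import numpy as np

# Discrete Karhunen-Loeve truncation of an exponential covariance kernel.
x = np.linspace(0.0, 1.0, 200)
ell = 0.2                                            # correlation length (assumed)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # covariance matrix
vals, vecs = np.linalg.eigh(C)
vals, vecs = vals[::-1], vecs[:, ::-1]               # sort modes by variance

energy = np.cumsum(vals) / vals.sum()
k = int(np.searchsorted(energy, 0.95)) + 1           # modes for 95% of variance
print(f"{k} KL modes capture 95% of the variance")

# Sample a random field from the truncated expansion.
xi = np.random.default_rng(2).standard_normal(k)
sample = vecs[:, :k] @ (np.sqrt(vals[:k]) * xi)
print(f"sample field std: {sample.std():.2f}")
```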

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also demonstrate the reduction in computational cost in the numerical examples.

Relevance: 20.00%

Abstract:

This thesis reports on a method to improve in vitro diagnostic assays that detect immune response, with specific application to HIV-1. The inherent polyclonal diversity of the humoral immune response was addressed by using sequential in situ click chemistry to develop a cocktail of peptide-based capture agents, the components of which were raised against different, representative anti-HIV antibodies that bind to a conserved epitope of the HIV-1 envelope protein gp41. The cocktail was used to detect anti-HIV-1 antibodies from a panel of sera collected from HIV-positive patients, with improved signal-to-noise ratio relative to the gold standard commercial recombinant protein antigen. The capture agents were stable when stored as a powder for two months at temperatures close to 60°C.

Relevance: 20.00%

Abstract:

Fundamental studies of the magnetic alignment of highly anisotropic mesostructures can enable the clean-room-free fabrication of flexible, array-based solar and electronic devices in which preferential orientation of nano- or microwire-type objects is desired. In this study, ensembles of 100-μm-long Si microwires with ferromagnetic Ni and Co coatings are oriented vertically in the presence of magnetic fields. The degree of vertical alignment and the threshold field strength depend on geometric factors, such as microwire length and ferromagnetic coating thickness, as well as on interfacial interactions, which are modulated by varying the solvent and substrate surface chemistry. Microwire ensembles with vertical alignment over 97% within 10 degrees of normal, as measured by X-ray diffraction, are achieved over square-centimeter-scale areas and set into flexible polymer films. A force balance model has been developed as a predictive tool for magnetic alignment, incorporating magnetic torque and empirically derived surface adhesion parameters. As supported by these calculations, microwires are shown to detach from the surface and align vertically in magnetic fields on the order of 100 gauss. Microwires aligned in this manner are set into a polydimethylsiloxane film, where they retain their vertical alignment after the field has been removed, and can subsequently be used as a flexible solar absorber layer. Finally, these microwire arrays can be protected for use in electrochemical cells by the conformal deposition of a graphene layer.
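
A minimal sketch of the torque-balance idea behind such a model: the magnetic torque on a saturated Ni coating must exceed an adhesion torque holding the wire down. Every number below is hypothetical (the thesis derives its adhesion parameters empirically), and the adhesion torque is chosen so the threshold lands near the reported ~100 gauss scale.

```python
import math

# Toy torque balance for wire lift-off: magnetic torque ~ m*B (at right
# angles) on a saturated Ni shell vs. an assumed surface-adhesion torque.

Ms_Ni = 4.9e5                      # A/m, saturation magnetization of Ni
L = 100e-6                         # wire length, m
r, t_shell = 1.0e-6, 300e-9        # wire radius and coating thickness (assumed)
V_shell = math.pi * ((r + t_shell) ** 2 - r ** 2) * L
m = Ms_Ni * V_shell                # magnetic moment of the shell, A m^2

tau_adh = 1e-12                    # N m, hypothetical surface-adhesion torque
B_threshold = tau_adh / m          # field at which magnetic torque wins
print(f"threshold field ~ {B_threshold * 1e4:.0f} gauss")
```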

Relevance: 20.00%

Abstract:

This study concerns the longitudinal dispersion of fluid particles which are initially distributed uniformly over one cross section of a uniform, steady, turbulent open channel flow. The primary focus is on developing a method to predict the rate of dispersion in a natural stream.

Taylor's method of determining a dispersion coefficient, previously applied to flow in pipes and two-dimensional open channels, is extended to a class of three-dimensional flows which have large width-to-depth ratios, and in which the velocity varies continuously with lateral cross-sectional position. Most natural streams are included. The dispersion coefficient for a natural stream may be predicted from measurements of the channel cross-sectional geometry, the cross-sectional distribution of velocity, and the overall channel shear velocity. Tracer experiments are not required.

Large values of the dimensionless dispersion coefficient D/rU* (where r is the hydraulic radius and U* is the shear velocity) are explained by lateral variations in downstream velocity. In effect, the characteristic length of the cross section is shown to be proportional to the width, rather than to the hydraulic radius. The dimensionless dispersion coefficient depends approximately on the square of the width-to-depth ratio.
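
A minimal numerical sketch of this extension of Taylor's method, assuming the standard triple-integral form for dispersion by lateral velocity variation (Fischer's form for wide channels); the channel geometry, velocity profile, and lateral mixing coefficient below are made up.

```python
import numpy as np

# Discrete dispersion coefficient from lateral velocity variation:
# D = -(1/A) * Int u'h * Int 1/(eps_t h) * Int u'h dy dy dy  (Fischer's form)

W = 20.0                                   # channel width, m
y = np.linspace(0.0, W, 400)
h = 1.0 + 0.5 * np.sin(np.pi * y / W)      # local depth, m (assumed)
u = 0.6 + 0.3 * np.sin(np.pi * y / W)      # depth-averaged velocity, m/s (assumed)
eps_t = 0.02 * np.ones_like(y)             # lateral mixing coefficient, m^2/s

A = np.trapz(h, y)                         # cross-sectional area
u_mean = np.trapz(u * h, y) / A
du = u - u_mean                            # lateral deviation of velocity

q = np.array([np.trapz((du * h)[: i + 1], y[: i + 1]) for i in range(y.size)])
inner = np.array([np.trapz((q / (eps_t * h))[: i + 1], y[: i + 1])
                  for i in range(y.size)])
D = -np.trapz(du * h * inner, y) / A
print(f"dispersion coefficient D ~ {D:.1f} m^2/s")
```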

A numerical program is given which is capable of generating the entire dispersion pattern downstream from an instantaneous point or plane source of pollutant. The program is verified by the theory for two-dimensional flow, and gives results in good agreement with laboratory and field experiments.

Both laboratory and field experiments are described. Twenty-one laboratory experiments were conducted: thirteen in two-dimensional flows, over both smooth and roughened bottoms; and eight in three-dimensional flows, formed by adding extreme side roughness to produce lateral velocity variations. Four field experiments were conducted in the Green-Duwamish River, Washington.

Both the laboratory and field experiments show that in three-dimensional flow the dominant mechanism for dispersion is lateral velocity variation. For instance, in one laboratory experiment the dimensionless dispersion coefficient D/rU* was increased by a factor of ten by roughening the channel banks. In three-dimensional laboratory flow, D/rU* varied from 190 to 640, a typical range for natural streams. For each experiment, the measured dispersion coefficient agreed with that predicted by the extension of Taylor's analysis within a maximum error of 15%. For the Green-Duwamish River, the average experimentally measured dispersion coefficient was within 5% of the prediction.

Relevance: 20.00%

Abstract:

The problem motivating this investigation is that of pure axisymmetric torsion of an elastic shell of revolution. The analysis is carried out within the framework of the three-dimensional linear theory of elastic equilibrium for homogeneous, isotropic solids. The objective is the rigorous estimation of errors involved in the use of approximations based on thin shell theory.

The underlying boundary value problem is one of Neumann type for a second-order elliptic operator. A systematic procedure for constructing pointwise estimates for the solution and its first derivatives is given for a general class of second-order elliptic boundary-value problems which includes the torsion problem as a special case.

The method used here rests on the construction of “energy inequalities” and on the subsequent deduction of pointwise estimates from the energy inequalities. This method removes certain drawbacks characteristic of pointwise estimates derived in some investigations of related areas.

Special interest is directed towards thin shells of constant thickness. The method enables us to estimate the error involved in a stress analysis in which the exact solution is replaced by an approximate one, and thus provides us with a means of assessing the quality of approximate solutions for axisymmetric torsion of thin shells.

Finally, the results of the present study are applied to the stress analysis of a circular cylindrical shell, and the quality of the stress estimates derived here and in a previous related publication is discussed.