954 results for Reasonable Accommodation


Relevance:

10.00%

Abstract:

The Alawariwa beels, located in the flood plains of the Ogun River off Ibafo in Owode/Obafemi Local Government Area of Ogun State, number 16 and cover an approximate total surface area of 28.0 hectares. The beels are conveniently exploited between January and April each year, when the dry season and riverine contraction make this possible. The daily landings showed that the fish enclosure is truly a natural fisheries reserve as well as a reservoir of biodiversity. Fish catch per unit effort is reasonable, especially for the more abundant fish species. The beel is sufficiently productive and worthy of the efforts of the eight fishers undertaking the daily assignment. Beel fishing is therefore economically advisable for fishers having access to such valuable communal or individual natural wetland resources.

Relevance:

10.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set consists of seismic spectra at periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes that rupture a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock areas of these earthquakes.
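
A minimal sketch of the grid-search idea, with hypothetical spectra rather than the thesis's data or excitation functions: a point source of duration tau multiplies the point-source spectrum by the boxcar factor sinc(wt/2)·exp(-iwt/2), and tau is chosen to minimize the spectral misfit.

```python
import numpy as np

# Minimal sketch with hypothetical spectra (not the thesis's data): recover a
# source-process time tau by grid search over the finite-duration factor.

periods = np.array([150.0, 175.0, 200.0, 225.0, 250.0, 275.0, 300.0])  # seconds
omega = 2.0 * np.pi / periods

rng = np.random.default_rng(0)
u_point = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, omega.size))  # stand-in point-source spectra
tau_true = 80.0                                                   # s, hypothetical

def finite_duration_factor(tau):
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(w*tau/2) = np.sinc(w*tau/(2*pi))
    return np.sinc(omega * tau / (2.0 * np.pi)) * np.exp(-0.5j * omega * tau)

u_obs = u_point * finite_duration_factor(tau_true) * (1.0 + 0.02 * rng.standard_normal(omega.size))

taus = np.arange(0.0, 200.0, 1.0)
misfits = [np.sum(np.abs(u_obs - u_point * finite_duration_factor(t)) ** 2) for t in taus]
print(f"estimated source-process time: {taus[int(np.argmin(misfits))]:.0f} s (true {tau_true:.0f} s)")
```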

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
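
The linear (moment-tensor) step amounts to a complex least-squares problem; the sketch below uses a random stand-in for the excitation kernels, which in reality depend on depth, Q, group velocity, and the earth model discussed above.

```python
import numpy as np

# Minimal sketch: long-period spectra are linear in the deviatoric
# moment-tensor components, u = G m, so a real-valued least-squares fit
# recovers m. G is a random stand-in for the true excitation kernels.

rng = np.random.default_rng(1)
n_spectra, n_m = 40, 5
G = rng.standard_normal((n_spectra, n_m)) + 1j * rng.standard_normal((n_spectra, n_m))
m_true = rng.standard_normal(n_m)                    # hypothetical moment-tensor elements
noise = 0.05 * (rng.standard_normal(n_spectra) + 1j * rng.standard_normal(n_spectra))
u = G @ m_true + noise

# Stack real and imaginary parts so the unknowns stay real-valued.
A = np.vstack([G.real, G.imag])
b = np.concatenate([u.real, u.imag])
m_est = np.linalg.lstsq(A, b, rcond=None)[0]
print("recovered moment-tensor components:", np.round(m_est, 3))
print("true components:                   ", np.round(m_true, 3))
```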

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90 % confidence intervals (numbers in parentheses) determined by Student's t test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki, Japan, and 1977 Sumbawa, Indonesia, earthquakes is determined from fundamental and overtone Rayleigh waves. From fundamental Rayleigh waves, the depths are determined by moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance:

10.00%

Abstract:

Consider a sphere immersed in a rarefied monatomic gas with zero mean flow. The distribution function of the molecules at infinity is chosen to be a Maxwellian. The boundary condition at the body is diffuse reflection with perfect accommodation to the surface temperature. The microscopic flow of particles about the sphere is modeled kinetically by the Boltzmann equation with the Krook collision term. Appropriate normalizations in the near and far fields lead to a perturbation solution of the problem, expanded in terms of the ratio of body diameter to mean free path (inverse Knudsen number). The distribution function is found directly in each region, and intermediate matching is demonstrated. The heat transfer from the sphere is then calculated as an integral over this distribution function in the inner region. Final results indicate that the heat transfer may at first increase over its free flow value before falling to the continuum level.

Relevance:

10.00%

Abstract:

We consider the following singularly perturbed linear two-point boundary-value problem:

Ly(x) ≡ Ω(ε) D_x y(x) - A(x,ε) y(x) = f(x,ε),  0 ≤ x ≤ 1,  (1a)

By ≡ L(ε) y(0) + R(ε) y(1) = g(ε),  ε → 0^+.  (1b)

Here Ω(ε) is a diagonal matrix whose first m diagonal elements are 1 and whose last m elements are ε. Aside from reasonable continuity conditions placed on A, L, R, f, and g, we assume the lower right m×m principal submatrix of A has no eigenvalues whose real part is zero. Under these assumptions a constructive technique is used to derive sufficient conditions for the existence of a unique solution of (1). These sufficient conditions are used to define when (1) is a regular problem. It is then shown that as ε → 0^+ the solution of a regular problem exists and converges on every closed subinterval of (0,1) to a solution of the reduced problem. The reduced problem consists of the differential equation obtained by formally setting ε equal to zero in (1a) and initial conditions obtained from the boundary conditions (1b). Several examples of regular problems are also considered.
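
The reduced-problem idea can be illustrated on a toy system with Ω(ε) = diag(1, ε); this is an assumed example (solved with scipy's collocation solver rather than the thesis's construction or difference scheme), showing that away from the boundary layer the full solution approaches the reduced solution obtained by setting ε = 0.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy example with Omega(eps) = diag(1, eps), A = [[-1, 1], [1, -1]], f = (1, 0)
# (the lower-right 1x1 block of A is -1, so its eigenvalue has nonzero real part):
#   y1'       = -y1 + y2 + 1
#   eps * y2' =  y1 - y2
# with boundary conditions y1(1) = 2 (slow) and y2(0) = 0 (fast).
# Setting eps = 0 gives the reduced problem y2 = y1, y1' = 1, y1(1) = 2,
# i.e. y1 = y2 = x + 1 away from the boundary layer at x = 0.

eps = 1e-2

def rhs(x, y):
    y1, y2 = y
    return np.vstack([-y1 + y2 + 1.0, (y1 - y2) / eps])

def bc(ya, yb):
    return np.array([yb[0] - 2.0, ya[1]])

x = np.linspace(0.0, 1.0, 2001)
sol = solve_bvp(rhs, bc, x, np.ones((2, x.size)), max_nodes=100000)

# Away from the layer the full solution agrees with the reduced one to O(eps).
for xm in (0.25, 0.5, 0.75):
    print(f"x = {xm}: full = {sol.sol(xm).round(3)}, reduced = [{xm + 1.0}, {xm + 1.0}]")
```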

A similar technique is used to derive the properties of the solution of a particular difference scheme used to approximate (1). Under restrictions on the boundary conditions (1b), it is shown that, for stepsizes much larger than ε, the solution of the difference scheme, when applied to a regular problem, accurately represents the solution of the reduced problem.

Furthermore, the existence of a similarity transformation which block-diagonalizes a matrix is presented, as well as exponential bounds on certain fundamental solution matrices associated with problem (1).

Relevance:

10.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid-body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4 % in time during penetration, and 0.3 to 0.7 mm in penetration depth. A series of penetration experiments with 4140 steel projectiles into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid-body penetration into brittle materials (the G-mixture mortar) under ~10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, the damage of recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-sectional area, es, when targets are intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm^2) + 16.39 (this and the related fits below are collected in a numerical sketch after this list).

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm^2) + 349.0 m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity-time histories suggest that the whole penetration history is divided into three stages: (1) An initial stage in which the projectile velocity change is small due to the very small contact area between the projectile and target materials; (2) A steady penetration stage in which the projectile velocity continues to decrease smoothly; (3) A penetration-stop stage in which the projectile deceleration jumps up when velocities are close to a critical value of ~35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on the target materials during penetration is estimated to be comparable to the shock-wave pressure.

7. A similarity of the penetration process is found to be described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles with 150 mm length, the penetration depth-time history for projectiles with 100 mm length predicted by this relation is in good agreement with the experimental data. This similarity also predicts that the average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration-process analysis and the present data, a first-principles model for rigid-body penetration is suggested. The model incorporates models for the contact area between projectile and target materials, the friction coefficient, the penetration-stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) The penetration process can be treated as a series of impact events, and therefore the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) The necessary condition for penetration is that the pressure acting on the target materials is not lower than the Hugoniot elastic limit; (3) The friction force on the projectile lateral surface can be ignored due to cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth-time histories predicted by the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain-rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905 (log(ε̇/ε̇0))^1.14) in the strain-rate range of 10^-7/s to 10^3/s (σ0f and ε̇0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
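
The empirical fits in items 2, 4, and 6 can be evaluated directly; the short sketch below simply codes those relations with the coefficients quoted above (the example projectile mass, radius, and velocity are hypothetical).

```python
import numpy as np

# Empirical fits quoted in items 2, 4, and 6 for 4140-steel projectiles into
# G-mixture mortar; the example projectile below is hypothetical.

def penetration_depth_mm(es):                 # item 2: Pmax = 1.15*es + 16.39
    return 1.15 * es + 16.39

def penetration_duration_us(es, m_g, R_mm):   # item 4: tmax = 2.08*es + 349.0*m/(pi*R^2)
    return 2.08 * es + 349.0 * m_g / (np.pi * R_mm**2)

def steady_deceleration_g(v_mps):             # item 6: a = 192.4*v + 1.89e4
    return 192.4 * v_mps + 1.89e4

m_g, R_mm, v = 100.0, 6.0, 300.0              # hypothetical mass (g), radius (mm), velocity (m/s)
es = 0.5 * (m_g * 1e-3) * v**2 / (np.pi * R_mm**2)   # initial kinetic energy per unit area, J/mm^2
print(f"es ~ {es:.1f} J/mm^2")
print(f"Pmax ~ {penetration_depth_mm(es):.0f} mm")
print(f"tmax ~ {penetration_duration_us(es, m_g, R_mm):.0f} us")
print(f"steady-stage deceleration ~ {steady_deceleration_g(v):.2e} g")
```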

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) A ramp elastic precursor has a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s. The wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) A ramp wave with an amplitude of 2.11 GPa follows the precursor when the peak loading pressure is 8.4 GPa. The wave velocity drops below the bulk wave velocity in this stage; (3) A shock wave achieving the final shock state forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], for shock-wave pressures between 6 and 40 GPa and ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock-loading data for fused SiO2 with that for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as their similar structure, provides the basis for considering vitreous GeO2 as an analogous material to fused SiO2 under shock loading.

Experimental results from the spherical projectile impact demonstrate: (1) The supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) In vitreous GeO2, unsupported shock waves with peak pressures in the phase-transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
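
For reference, the Hugoniot pressure implied by the linear fit D = 0.917 + 1.711u quoted above follows from momentum conservation across a shock into material at rest, P = ρ0 D u; the sketch below uses the quoted initial density, and the sampled particle velocities are arbitrary points within the stated 6-40 GPa range of the fit.

```python
# Hugoniot pressure from the linear shock-velocity fit quoted above,
# D = 0.917 + 1.711*u (km/s), via momentum conservation P = rho0 * D * u.
rho0 = 3655.0            # kg/m^3 (3.655 g/cm^3)
c0, s = 917.0, 1.711     # m/s, dimensionless

for u in (800.0, 1500.0, 2200.0):             # particle velocity, m/s (arbitrary samples)
    D = c0 + s * u                            # shock velocity, m/s
    P = rho0 * D * u                          # pressure, Pa
    print(f"u = {u:.0f} m/s -> D = {D / 1e3:.2f} km/s, P = {P / 1e9:.1f} GPa")
```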

Relevance:

10.00%

Abstract:

Generally, wetlands are thought to perform water purification functions, removing contaminants as water flows through sediment and vegetation. This paradigm was challenged when Grant et al. (2001) reported that Talbert Salt Marsh (Figure 1) increased fecal indicator bacteria (FIB) output to coastal waters, contributing to poor coastal water quality. Like most southern California wetlands, Talbert Salt Marsh has been severely degraded. It is a small (10 ha), restored wetland, only 1/100th its original size, located at the base of a highly urbanized watershed. Is it reasonable to expect that this or any severely altered wetland will provide the same water purification benefits as a natural wetland? To determine how a more pristine southern California coastal wetland attenuates bacterial contaminants, we investigated FIB concentrations entering and exiting Carpinteria Salt Marsh (Figure 2), a 93 ha, moderate-sized, relatively natural wetland. (PDF contains 4 pages)

Relevance:

10.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
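
As a toy illustration of why asymmetry pays off (an assumed repetition-code example, not the specific codes analyzed in this thesis): under strongly dephasing-biased noise, a code that corrects only Z errors suppresses the dominant error combinatorially, while the rare bit-flips remain near their physical rate, so a modest code length already pushes the corrected dephasing contribution below the unprotected bit-flip contribution.

```python
from math import comb

# Toy illustration (not the thesis's codes): a length-n repetition code that
# corrects dephasing fails only when more than half the qubits dephase, while
# each bit-flip slips through as an undetected logical error.

def corrected_dephasing(n, p_z):
    """Majority-vote failure probability for n independent Z errors (n odd)."""
    return sum(comb(n, k) * p_z**k * (1 - p_z)**(n - k) for k in range((n + 1) // 2, n + 1))

def unprotected_bitflip(n, p_x):
    """Probability of an odd number of bit-flips, each acting as a logical error."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_x)**n)

p_z, p_x = 1e-2, 1e-5          # hypothetical biased physical error rates
for n in (1, 3, 5, 7):
    print(f"n = {n}: corrected Z ~ {corrected_dephasing(n, p_z):.1e}, "
          f"unprotected X ~ {unprotected_bitflip(n, p_x):.1e}")
```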

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for the faulty gates and a second rate, for errors in the distilled states, that decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly the states converge to that limit.
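
A schematic model of this interplay (assumed rates, and the textbook cubic suppression of a 15-to-1-style round rather than a protocol taken from this thesis) shows the distilled error converging to a floor set by the faulty Cliffords.

```python
# Schematic model with assumed numbers: each round maps the magic-state error
# p to roughly 35*p**3, while faulty Cliffords add a floor of order their own
# error rate, so the distilled error converges to a Clifford-limited value.

p_clifford = 1e-6      # hypothetical fixed error rate of the Clifford operations
p = 1e-2               # hypothetical initial magic-state error rate

for round_idx in range(1, 6):
    p = 35.0 * p**3 + p_clifford
    print(f"round {round_idx}: distilled error ~ {p:.2e}")
```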

Relevance:

10.00%

Abstract:

Despite the complexity of biological networks, we find that certain common architectures govern network structures. These architectures impose fundamental constraints on system performance and create tradeoffs that the system must balance in the face of uncertainty in the environment. This means that while a system may be optimized for a specific function through evolution, the optimal achievable state must still satisfy these constraints. One such constraining architecture is autocatalysis, as seen in many biological networks including glycolysis and ribosomal protein synthesis. Using a minimal model, we show that ATP autocatalysis in glycolysis imposes stability and performance constraints and that the experimentally well-studied glycolytic oscillations are in fact a consequence of a tradeoff between error minimization and stability. We also show that additional complexity in the network results in increased robustness. Ribosome synthesis is also autocatalytic, since ribosomes must be used to make more ribosomal proteins. When ribosomes have higher protein content, the autocatalysis is stronger. We show that this autocatalysis destabilizes the system, slows down its response, and constrains the system's performance. On a larger scale, transcriptional regulation of whole organisms also follows architectural constraints, and this can be seen in the differences between bacterial and yeast transcription networks. We show that the degree distribution of the bacterial transcription network follows a power law while that of the yeast network follows an exponential distribution. We then explore previously proposed evolutionary models and show that neither the preferential-linking model nor the duplication-divergence model of network evolution generates the power-law, hierarchical structure found in bacteria. However, in real biological systems, the generation of new nodes occurs through both duplication and horizontal gene transfer, and we show that a biologically reasonable combination of the two mechanisms generates the desired network.
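
The two growth models mentioned above can be sketched in a few lines; this is an illustrative comparison with arbitrary parameters (using networkx for convenience), not the thesis's analysis, showing how preferential attachment and a simple duplication-divergence rule yield different degree distributions.

```python
import random
from collections import Counter
import networkx as nx

# Illustrative comparison with arbitrary parameters (not the thesis's analysis).

random.seed(0)
N = 2000

# Preferential attachment (Barabasi-Albert), which produces a power-law tail.
pa = nx.barabasi_albert_graph(N, 2, seed=0)

# Duplication-divergence: duplicate a randomly chosen node and keep each of
# its edges with retention probability q.
q = 0.4
dd = nx.Graph([(0, 1), (1, 2), (2, 0)])       # small seed graph
for new in range(3, N):
    target = random.choice(list(dd.nodes))
    neighbors = list(dd.neighbors(target))
    dd.add_node(new)
    for nb in neighbors:
        if random.random() < q:
            dd.add_edge(new, nb)

def degree_counts(g):
    return sorted(Counter(dict(g.degree()).values()).items())

print("preferential attachment (degree, count):", degree_counts(pa)[:10])
print("duplication-divergence  (degree, count):", degree_counts(dd)[:10])
```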

Relevance:

10.00%

Abstract:

The effect of the laser spot size on the neutron yield of table-top nuclear fusion from explosions of deuterium clusters heated by an intense femtosecond laser pulse is investigated using a simplified model, in which the cluster size distribution and the energy attenuation of the laser as it propagates through the cluster jet are taken into account. It is found that there exists a proper laser spot size that maximizes the fusion neutron yield for a given laser pulse and a specific deuterium cluster jet. The proper spot size, which depends on the laser parameters and the cluster-jet parameters, has been calculated and compared with the available experimental data. Reasonable agreement between the calculated results and the published experimental results is found.

Relevance:

10.00%

Abstract:

Commercially available software packages for IBM PC compatibles are evaluated for use in data acquisition and processing work. Moss Landing Marine Laboratories (MLML) has acquired computers since 1978 for shipboard data acquisition (i.e., CTD, radiometric, etc.) and data processing. Hewlett-Packard desktop computers were used first, followed by a transition to DEC VAXstations, with software developed mostly by the author and others at MLML (Broenkow and Reaves, 1993; Feinholz and Broenkow, 1993; Broenkow et al., 1993). IBM PCs were at first very slow and limited in available software, so they were not used in the early days. Improved technology, such as higher-speed microprocessors, and a wide range of commercially available software make use of PCs more reasonable today. MLML is making a transition towards using PCs for data acquisition and processing. Advantages are portability and available outside support.

Relevance:

10.00%

Abstract:

In this paper, we briefly summarize two typical morphological characteristics of the self-organized void array induced in the bulk of fused silica glass by a tightly focused femtosecond laser beam: the key role of high numerical aperture in the void-array formation and the concentric-circle-like structure seen in the top view of the void array. By adopting a physical model that combines the nonlinear propagation of femtosecond laser pulses with the spherical aberration (SA) effect at the interface between two media of different refractive indices, reasonable agreement between the simulation results and the experimental results is obtained. By comparing the fluence distributions for the case with both SA and nonlinear effects included and the case with only SA considered, we suggest that spherical aberration, which results from the refractive-index mismatch between air and fused silica glass, is the main reason for the formation of the self-organized void array. (c) 2008 American Institute of Physics.

Relevance:

10.00%

Abstract:

This thesis presents an experimental investigation of the axisymmetric heat transfer from a small-scale fire and the resulting buoyant plume to a horizontal, unobstructed ceiling during the initial stages of development. A propane-air burner yielding a heat source strength between 1.0 kW and 1.6 kW was used to simulate the fire, and measurements confirmed that this heat source satisfactorily represented a source of buoyancy only. The ceiling consisted of a 1/16" steel plate of 0.91 m diameter, insulated on the upper side. The ceiling height was adjustable between 0.5 m and 0.91 m. Temperature measurements were carried out in the plume, in the ceiling jet, and on the ceiling.

Heat transfer data were obtained by using the transient method and applying corrections for the radial conduction along the ceiling and losses through the insulation material. The ceiling heat transfer coefficient was based on the adiabatic ceiling jet temperature (recovery temperature) reached after a long time. A parameter involving the source strength Q and ceiling height H was found to correlate measurements of this temperature and its radial variation. A similar parameter for estimating the ceiling heat transfer coefficient was confirmed by the experimental results.

This investigation therefore provides reasonable estimates for the heat transfer from a buoyant gas plume to a ceiling in the axisymmetric case, both for the stagnation region, where such heat transfer is a maximum, and for the ceiling jet region (r/H ≤ 0.7). A comparison with data from experiments involving larger heat sources indicates that the predicted scaling of temperatures and heat transfer rates for larger-scale fires is adequate.

Relevance:

10.00%

Abstract:

The work described in this dissertation includes fundamental investigations into three surface processes, namely inorganic film growth, water-induced oxidation, and organic functionalization/passivation, on the GaP and GaAs(001) surfaces. The techniques used to carry out this work include scanning tunneling microscopy (STM), X-ray photoelectron spectroscopy (XPS), and density functional theory (DFT) calculations. Atomic structure, electronic structure, reaction mechanisms, and energetics related to these surface processes are discussed at atomic or molecular levels.

First, we investigate epitaxial Zn3P2 films grown on the Ga-rich GaAs(001)(6×6) surface. The film growth mechanism, electronic properties, and atomic structure of the Zn3P2/GaAs(001) system are discussed based on experimental and theoretical observations. We discover that a P-rich amorphous layer covers the crystalline Zn3P2 film during and after growth. We also propose a more accurate picture of the GaP interfacial layer between Zn3P2 and GaAs than was previously anticipated, based on its atomic structure, chemical bonding, band diagram, and P-replacement energetics.

Second, DFT calculations are carried out in order to understand water-induced oxidation mechanisms on the Ga-rich GaP(001)(2×4) surface. Structural and energetic information for every step in the gaseous-water-induced GaP oxidation reactions is elucidated at the atomic level in great detail. We explore all reasonable ground states involved in most of the possible adsorption and decomposition pathways. We also investigate the structures and energies of the transition states in the first hydrogen dissociation of a water molecule on the (2×4) surface.

Finally, adsorption structures and thermal decomposition reactions of 1-propanethiol on the Ga-rich GaP(001)(2×4) surface are investigated using high-resolution STM, XPS, and DFT simulations. We elucidate the adsorption locations, and their associated atomic structures, of a single 1-propanethiol molecule on the (2×4) surface as a function of annealing temperature. DFT calculations are carried out to optimize ground-state structures and to search for transition states. XPS is used to investigate variations in the chemical bonding and coverage of the adsorbate species.

Relevance:

10.00%

Abstract:

Experimental fishing trials were conducted in the Elbe Estuary using an experimental 3 m standard beam trawl. To avoid the by-catch of fish, a sorting grid was used. The elliptical grid was constructed of 6 mm diameter stainless steel bars with a spacing of 13 mm between the bars and was housed in a cylindrical frame of 400 mm diameter. It was installed in the extension piece just in front of the codend, angled at 45°, with a fish outlet at the top. A series of 10 tows of 15 minutes duration at a towing speed of 3 kn was carried out. The catch of the main codend was compared with the catch separated by the sorting grid. The grid achieved a reduction of 56 % of plaice (Pleuronectes platessa), 75 % of flounder (Platichthys flesus), 99 % of whiting (Merlangius merlangus), 94 % of cod (Gadus morhua), and 49 % of smelt (Osmerus eperlanus), with a mean loss of 43 % of shrimps (Crangon crangon). English grid trials in the Humber estuary, using a flapper set or guiding funnel in front of the sorting-grid device, demonstrated reasonably lower escapement rates for fish and shrimps.