25 results for Percent


Relevance: 10.00%

Abstract:

The speciation of water in a variety of hydrous silicate glasses, including simple and rhyolitic compositions, synthesized over a range of experimental conditions with up to 11 weight percent water has been determined using infrared spectroscopy. This technique has been calibrated with a series of standard glasses and provides a precise and accurate method for determining the concentrations of molecular water and hydroxyl groups in these glasses.

For all the compositions studied, most of the water is dissolved as hydroxyl groups at total water contents less than 3-4 weight percent; at higher total water contents, molecular water becomes the dominant species. For total water contents above 3-4 weight percent, the amount of water dissolved as hydroxyl groups is approximately constant at about 2 weight percent and additional water is incorporated as molecular water. Although there are small but measurable differences in the ratio of molecular water to hydroxyl groups at a given total water content among these silicate glasses, the speciation of water is similar over this range of composition. The trends in the concentrations of the H-bearing species in the hydrous glasses included in this study are similar to those observed in other silicate glasses using either infrared or NMR spectroscopy.

The effects of pressure and temperature on the speciation of water in albitic glasses have been investigated. The ratio of molecular water to hydroxyl groups at a given total water content is independent of the pressure and temperature of equilibration for albitic glasses synthesized in rapidly quenching piston-cylinder apparatus at temperatures greater than 1000°C and pressures greater than 8 kbar. For hydrous glasses quenched from melts cooled at slower rates (i.e., in internally heated or in air-quench cold-seal pressure vessels), there is an increase in the ratio of molecular water to hydroxyl group content that probably reflects reequilibration of the melt to lower temperatures during slow cooling.

Molecular water and hydroxyl group concentrations in glasses provide information on the dissolution mechanisms of water in silicate liquids. Several mixing models involving homogeneous equilibria of the form H_2O + O = 2OH among melt species have been explored for albitic melts. These models can account for the measured species concentrations if the effects of non-ideal behavior or mixing of polymerized units are included, or by allowing for the presence of several different types of anhydrous species.
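The homogeneous equilibrium H_2O + O = 2OH can be sketched numerically under an ideal-mixing assumption. The equilibrium constant below is an illustrative round number, not a value fitted in this work:

```python
# Ideal-mixing sketch of the speciation reaction H2O + O = 2OH.
# K is an illustrative equilibrium constant, NOT a value fitted in this work.

def speciate(x_total, k=0.2, tol=1e-12):
    """Return (molecular H2O, OH) mole fractions for a total water
    mole fraction x_total, assuming ideal mixing:
        K = x_OH**2 / (x_H2O * x_O),  with  x_O = 1 - x_H2O - x_OH
    and the mass balance x_H2O + x_OH / 2 = x_total."""
    lo, hi = 0.0, x_total
    while hi - lo > tol:
        m = 0.5 * (lo + hi)          # trial molecular-water fraction
        h = 2.0 * (x_total - m)      # hydroxyl fraction from mass balance
        f = h * h - k * m * (1.0 - m - h)
        if f > 0.0:                  # too much OH: shift water to molecular
            lo = m
        else:
            hi = m
    return m, 2.0 * (x_total - m)

# Hydroxyl dominates at low total water; molecular water at high total water,
# mirroring the crossover described in the text.
for x in (0.01, 0.10):
    m, h = speciate(x)
    print(f"x_total={x:.2f}: molecular={m:.4f}, hydroxyl={h:.4f}")
```

The crossover between hydroxyl-dominated and molecular-water-dominated regimes emerges from the mass balance alone once K is fixed, which is why such mixing models can reproduce the trends in the measured species concentrations.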

A thermodynamic model for hydrous albitic melts has been developed based on the assumption that the activity of water in the melt is equal to the mole fraction of molecular water determined by infrared spectroscopy. This model can account for the position of the water-saturated solidus of crystalline albite, the pressure and temperature dependence of the solubility of water in albitic melt, and the volumes of hydrous albitic melts. To the extent that it is successful, this approach provides a direct link between measured species concentrations in hydrous albitic glasses and the macroscopic thermodynamic properties of the albite-water system.

The approach taken in modelling the thermodynamics of hydrous albitic melts has been generalized to other silicate compositions. Spectroscopic measurements of species concentrations in rhyolitic and simple silicate glasses quenched from melts equilibrated with water vapor provide important constraints on the thermodynamic properties of these melt-water systems. In particular, the assumption that the activity of water is equal to the mole fraction of molecular water has been tested in detail and shown to be a valid approximation for a range of hydrous silicate melts, and the partial molar volume of water in these systems has been constrained. Thus, the results of this study provide a useful thermodynamic description of hydrous melts that can be readily applied to other melt-water systems for which spectroscopic measurements of the H-bearing species are available.

Relevance: 10.00%

Abstract:

This thesis consists of three separate studies of roles that black holes might play in our universe.

In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
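The core of the distance-redshift inference can be illustrated at low redshift, where the luminosity distance reduces to d_L ≈ cz/H0. This toy single-binary inversion is only the leading-order idea, not the statistical method of the thesis, and the numbers are made up:

```python
# Low-redshift toy version of the distance-redshift inference: given a
# "luminosity distance" read off a gravitational waveform and a redshift,
# invert d_L = c * z / H0 for the Hubble constant. The thesis method is
# statistical over many binaries; this shows only the leading-order idea.
C_KM_S = 299792.458  # speed of light, km/s

def hubble_constant(d_l_mpc, z):
    """H0 in km/s/Mpc from one (distance, redshift) pair, valid for z << 1."""
    return C_KM_S * z / d_l_mpc

# A hypothetical binary at z = 0.01 whose waveform gives d_L = 42.83 Mpc:
h0 = hubble_constant(42.83, 0.01)
print(f"H0 = {h0:.1f} km/s/Mpc")
```

Because H0 is a ratio of two measured quantities, its fractional error is roughly the quadrature sum of the fractional errors in z and d_L, which is why the few-percent waveform distance errors translate into a few-percent Hubble constant.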

In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.

In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.

Relevance: 10.00%

Abstract:

The Lake Elsinore quadrangle covers about 250 square miles and includes parts of the southwest margin of the Perris Block, the Elsinore trough, the southeastern end of the Santa Ana Mountains, and the Elsinore Mountains.

The oldest rocks consist of an assemblage of metamorphics of igneous effusive and sedimentary origin, probably, for the most part, of Triassic age. They are intruded by diorite and various hypabyssal rocks, then in turn by granitic rocks, which occupy over 40 percent of the area. Following this last igneous activity, of probable Lower Cretaceous age, an extended period of sedimentation started with the deposition of the marine Upper Cretaceous Chico formation and continued during the Paleocene under alternating marine and continental conditions on the margins of the blocks. A marine regression towards the north, during the Neocene, accounts for the younger Tertiary strata in the region under consideration.

Outpouring of basalts to the southeast indicates that igneous activity was resumed toward the close of the Tertiary. The fault zone which characterizes the Elsinore trough marks one of the major tectonic lines of southern California. It separates the upthrown and tilted block of the Santa Ana Mountains to the south from the Perris Block to the north.

Most of the faults are normal in type and nearly parallel to the general trend of the trough, or intersect each other at an acute angle. Vertical displacements generally exceed the horizontal ones and several periods of activity are recognized.

Tilting of Tertiary and older Quaternary sediments in the trough has produced broad synclinal structures which have been modified by subsequent faulting.

Five old surfaces of erosion are exposed on the highlands.

The mineral resources of the region are mainly high-grade clay deposits and mineral waters.

Relevance: 10.00%

Abstract:

This thesis presents two different forms of the Born approximations for acoustic and elastic wavefields and discusses their application to the inversion of seismic data. The Born approximation is valid for small amplitude heterogeneities superimposed over a slowly varying background. The first method is related to frequency-wavenumber migration methods. It is shown to properly recover two independent acoustic parameters within the bandpass of the source time function of the experiment for contrasts of about 5 percent from data generated using an exact theory for flat interfaces. The independent determination of two parameters is shown to depend on the angle coverage of the medium. For surface data, the impedance profile is well recovered.

The second method explored is mathematically similar to iterative tomographic methods recently introduced in the geophysical literature. Its basis is an integral relation between the scattered wavefield and the medium parameters, obtained after applying a far-field approximation to the first-order Born approximation. The Davidon-Fletcher-Powell algorithm is used since it converges faster than the steepest descent method. It consists essentially of successive backprojections of the recorded wavefield, with angular and propagation weighting coefficients for density and bulk modulus. After each backprojection, the forward problem is computed and the residual evaluated. Each backprojection is similar to a before-stack Kirchhoff migration and is therefore readily applicable to seismic data. Several examples of reconstruction for simple point-scatterer models are performed. Recovery of the amplitudes of the anomalies is improved with successive iterations. Iterations also improve the sharpness of the images.
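The backproject / recompute-forward-problem / evaluate-residual loop can be sketched with a steepest-descent least-squares update. The thesis uses Davidon-Fletcher-Powell, which converges faster; this simplified sketch with a made-up 2×2 forward operator only shows the residual-driven structure of each iteration:

```python
# Residual-driven iterative inversion sketch: each pass "backprojects" the
# data residual (multiplies by G^T) and updates the model, mirroring the
# backproject / recompute-forward / evaluate-residual loop in the text.
# G is a made-up 2x2 forward operator, not a seismic kernel.
def matvec(a, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

G = [[2.0, 1.0], [1.0, 3.0]]
d = [5.0, 10.0]                 # observed data (the true model is [1, 3])
m = [0.0, 0.0]                  # starting model
Gt = transpose(G)
alpha = 0.05                    # step size small enough for convergence

residuals = []
for _ in range(200):
    r = [di - gi for di, gi in zip(d, matvec(G, m))]   # forward + residual
    residuals.append(sum(ri * ri for ri in r) ** 0.5)
    grad = matvec(Gt, r)                               # backprojection
    m = [mi + alpha * gi for mi, gi in zip(m, grad)]

print(f"recovered model: {m[0]:.3f}, {m[1]:.3f}")
```

As in the reconstructions described above, successive iterations shrink the residual and sharpen the recovered amplitudes; quasi-Newton schemes such as DFP accelerate exactly this loop by reweighting the backprojected gradient.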

The elastic Born approximation, with the addition of a far-field approximation, is shown to correspond physically to a sum of WKBJ-asymptotic scattered rays. Four types of scattered rays enter in the sum, corresponding to P-P, P-S, S-P, and S-S pairs of incident-scattered rays. Incident rays propagate in the background medium, interacting only once with the scatterers. Scattered rays propagate as if in the background medium, with no interaction with the scatterers. An example of P-wave impedance inversion is performed on a VSP data set consisting of three offsets recorded in two wells.

Relevance: 10.00%

Abstract:

This work is divided into two independent papers.

PAPER 1.

Spall velocities were measured for nine experimental impacts into San Marcos gabbro targets. Impact velocities ranged from 1 to 6.5 km/sec. Projectiles were iron, aluminum, lead, and basalt of varying sizes. The projectile masses ranged from a 4 g lead bullet to a 0.04 g aluminum sphere. The velocities of fragments were measured from high-speed films taken of the events. The maximum spall velocity observed was 30 m/sec, or 0.56 percent of the 5.4 km/sec impact velocity. The measured velocities were compared to the spall velocities predicted by the spallation model of Melosh (1984). The compatibility between the spallation model for large planetary impacts and the results of these small-scale experiments is considered in detail.

The targets were also bisected to observe the pattern of internal fractures. A series of fractures were observed, whose location coincided with the boundary between rock subjected to the peak shock compression and a theoretical "near surface zone" predicted by the spallation model. Thus, between this boundary and the free surface, the target material should receive reduced levels of compressive stress as compared to the more highly shocked region below.

PAPER 2.

Carbonate samples from the nuclear explosion crater OAK and a terrestrial impact crater, Meteor Crater, were analyzed for shock damage using electron paramagnetic resonance (EPR). The first series of samples for OAK Crater were obtained from six boreholes within the crater, and the second series were ejecta samples recovered from the crater floor. The degree of shock damage in the carbonate material was assessed by comparing the sample spectra to spectra of Solenhofen limestone, which had been shocked to known pressures.

The results of the OAK borehole analysis have identified a thin zone of highly shocked carbonate material underneath the crater floor. This zone has a maximum depth of approximately 200 ft below the sea floor at the ground-zero borehole and decreases in depth towards the crater rim. A layer of highly shocked material is also found on the surface in the vicinity of the reference borehole, located outside the crater. This material could represent a fallout layer. The ejecta samples have experienced a range of shock pressures.

It was also demonstrated that the EPR technique is feasible for the study of terrestrial impact craters formed in carbonate bedrock. The results for the Meteor Crater analysis suggest a slight degree of shock damage present in the β member of the Kaibab Formation exposed in the crater walls.

Relevance: 10.00%

Abstract:

Cross sections for the photoproduction of neutral pi, eta, rho, and phi mesons on hydrogen have been measured at the Stanford Linear Accelerator Center using a missing-mass spectrometer technique. The data cover photon energies between 5.0 and 17.8 GeV and four-momentum transfer squared t between -0.12 and -1.38 (GeV/c)².

Pion differential cross sections at lower energies show a peak at low momentum transfers, a distinctive dip and secondary maximum for t in the region -0.4 to -0.9 (GeV/c)², and a smooth decrease at higher momentum transfers. As photon energy increases, the dip becomes less pronounced, in contradiction to the expectations of simple Regge theories based on the exchange of omega and B trajectories only.

Eta photoproduction was measured only below 10 GeV. The cross section has about the same magnitude as the pion production cross section, but decreases exponentially with t, showing no dip.

Rho mesons appear to be diffractively produced. The differential cross section varies approximately as exp(8.5t + 2t²). It falls slowly with energy, decreasing about 35 percent from 6 GeV to 17.8 GeV. A simple quark model relation appears to describe the data well.
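The quoted empirical t-dependence for rho production can be evaluated directly. Only the shape exp(8.5t + 2t²) comes from the text; the overall normalization here is arbitrary:

```python
# Shape of the rho photoproduction differential cross section quoted in the
# text, dsigma/dt proportional to exp(8.5 t + 2 t^2), with t in (GeV/c)^2
# (t < 0). The normalization is arbitrary; only the t-dependence is quoted.
import math

def rho_shape(t):
    return math.exp(8.5 * t + 2.0 * t * t)

for t in (-0.12, -0.5, -1.0):
    print(f"t = {t:5.2f}: relative dsigma/dt = {rho_shape(t):.3e}")
```

Over the measured range the exponential term dominates, so the cross section falls steadily with |t|, the forward-peaked behavior characteristic of diffractive production.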

Phi meson cross sections are also consistent with diffraction production. The differential cross section varies approximately as exp(4t). The cross section tends to decrease slightly with photon energy.

Relevance: 10.00%

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero at the start to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, until the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database, which I collected from an outpatient HIV clinic, includes data from 1984 until the present. The second database was the Multicenter AIDS Cohort Study (MACS) public dataset, which covers the period between 1984 and October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus, distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
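The practical point about non-Gaussian CD4 counts, that Gaussian-based summaries mislead, can be illustrated with synthetic right-skewed counts. The lognormal shape and its parameters below are purely illustrative, not the fitted distributions of the thesis:

```python
# Why distributional assumptions matter for CD4 counts: a right-skewed
# sample (lognormal here, purely illustrative) has a mean well above its
# median and large positive skewness, so Gaussian summaries mislead.
import random

random.seed(1)
counts = [random.lognormvariate(6.0, 0.5) for _ in range(5000)]

n = len(counts)
mean = sum(counts) / n
m2 = sum((c - mean) ** 2 for c in counts) / n
m3 = sum((c - mean) ** 3 for c in counts) / n
skewness = m3 / m2 ** 1.5        # sample skewness; 0 for a Gaussian
median = sorted(counts)[n // 2]

print(f"mean={mean:.0f}, median={median:.0f}, skewness={skewness:.2f}")
```

A skewness far from zero and a mean-median gap are exactly the features that invalidate normal-theory tests on raw counts, motivating either transformation or distribution-free methods.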

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era, characterized by a new class of drugs and new technology, changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents, were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime cost for treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.

Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 10.00%

Abstract:

For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.

To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.

I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
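The "generalized bounds" idea builds on a deterministic observation: county-level totals alone already constrain each group's rate, without any individual survey data. A sketch of those deterministic (Duncan-Davis) bounds with made-up county numbers:

```python
# Deterministic ecological bounds, the starting point for the generalized-
# bounds / ecological inference approach described in the text: given a
# county's Latino share x of the voting-age population and its overall
# turnout t, the Latino turnout rate b is bounded without individual data,
# because the white rate w in x*b + (1 - x)*w = t must lie in [0, 1].
def latino_rate_bounds(x, t):
    """Return (lower, upper) bounds on the Latino rate b."""
    lower = max(0.0, (t - (1.0 - x)) / x)
    upper = min(1.0, t / x)
    return lower, upper

# Made-up county: 60% Latino voting-age population, 50% overall turnout.
lo, hi = latino_rate_bounds(0.60, 0.50)
print(f"Latino turnout bounded to [{lo:.3f}, {hi:.3f}]")
```

The bounds are informative only when the group is a large share of the county; the statistical machinery of ecological inference combines many counties' bounds to narrow the estimate, which is what the county-level analysis above exploits.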

I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.

With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.

Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.

Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.

While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.

Relevance: 10.00%

Abstract:

Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle-number distributions were measured with a Coulter Counter. Number counts in excess of 10¹² particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.

Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
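For intuition about how strongly sedimentation velocity depends on particle size, a Stokes-law estimate is useful. This is an idealization: sludge particles are irregular flocs rather than smooth spheres, and the densities and viscosity below are assumed round numbers, not measurements from this work:

```python
# Stokes-law settling velocity v = g * d^2 * (rho_p - rho_f) / (18 * mu)
# for a small sphere in seawater. An idealized sketch for intuition only:
# sludge particles are irregular flocs, and all property values are assumed.
G_ACCEL = 9.81          # gravitational acceleration, m/s^2
RHO_FLUID = 1025.0      # seawater density, kg/m^3 (assumed)
RHO_PARTICLE = 1500.0   # particle density, kg/m^3 (assumed)
MU = 1.07e-3            # seawater dynamic viscosity, Pa*s (assumed)

def stokes_velocity(d):
    """Settling velocity (m/s) of a sphere of diameter d (m)."""
    return G_ACCEL * d * d * (RHO_PARTICLE - RHO_FLUID) / (18.0 * MU)

for d_um in (1.0, 10.0):
    v = stokes_velocity(d_um * 1e-6)
    print(f"d = {d_um:4.1f} um: v = {v:.2e} m/s")
```

The quadratic dependence on diameter means the mostly sub-10-micron particles settle very slowly, which is consistent with the strong sensitivity of the measured sedimentation-velocity distributions to dilution and flocculation state.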

Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.

The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the point discharge.

Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.

The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.

Relevance: 10.00%

Abstract:

The velocity of selectively introduced edge dislocations in 99.999 percent pure copper crystals has been measured as a function of stress at temperatures from 66°K to 373°K by means of a torsion technique. The range of resolved shear stress was 0 to 15 megadynes/cm^2 for seven temperatures (66°K, 74°K, 83°K, 123°K, 173°K, 296°K, and 373°K).

Dislocation mobility is characterized by two distinct features: (a) relatively high velocity at low stress (maximum velocities of about 9000 cm/sec were realized at low temperatures), and (b) increasing velocity with decreasing temperature at constant stress.

The relation between dislocation velocity and resolved shear stress is:

v = v_o(τ_r/τ_o)^n

where v is the dislocation velocity at resolved shear stress τ_r, v_o is a constant velocity chosen equal to 2000 cm/sec, τ_o is the resolved shear stress required to maintain velocity v_o, and n is the mobility coefficient. The experimental results indicate that τ_o decreases from 16.3 x 10^6 to 3.3 x 10^6 dynes/cm^2 and n increases from about 0.9 to 1.1 as the temperature is lowered from 296°K to 66°K.
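Plugging the quoted parameters into the mobility law reproduces the reported trend of higher velocity at lower temperature for the same stress. Stresses are in dynes/cm² and velocities in cm/sec, as in the text:

```python
# Evaluate the mobility law v = v_o * (tau_r / tau_o)**n with the fitted
# parameters quoted in the text (v_o = 2000 cm/sec; tau_o and n at 296 K
# and at 66 K). Stresses in dynes/cm^2, velocities in cm/sec.
V0 = 2000.0  # cm/sec

def dislocation_velocity(tau, tau0, n):
    return V0 * (tau / tau0) ** n

TAU = 10.0e6  # a resolved shear stress within the measured 0-15 MD/cm^2 range
v_296 = dislocation_velocity(TAU, tau0=16.3e6, n=0.9)
v_66 = dislocation_velocity(TAU, tau0=3.3e6, n=1.1)
print(f"v(296 K) = {v_296:.0f} cm/sec, v(66 K) = {v_66:.0f} cm/sec")
```

At the same stress the 66°K parameters give a velocity several times larger than the 296°K ones, the inverse temperature dependence that points toward phonon-drag control of the mobility.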

The experimental dislocation behavior is consistent with an interpretation on the basis of phonon drag. However, the complete temperature dependence of dislocation mobility could not be closely approximated by the predictions of one or a combination of mechanisms.

Relevance: 10.00%

Abstract:

Isotopic fractionation due to sputtering has been investigated via a collector-type experiment in which targets of known isotopic composition have been bombarded with several-keV Ar⁺ and Xe⁺ ions at fluences down to 3.0×10¹⁴ ions/cm², believed to be the lowest fluences for which such detailed measurements have ever been made. The isotopes were sputtered onto carbon collectors and analyzed with Secondary Ion Mass Spectroscopy (SIMS). There is clear indication of preferential effects several times that predicted by the dominant analytical theory. Results also show a fairly strong angular variation in the fractionation. The maximum effect is usually seen in the near-normal direction, measured from the target surface, falling continuously, by a few percent in some cases, to a minimum in the oblique direction. Measurements have been made using the Mo isotopes ¹⁰⁰Mo and ⁹²Mo and a liquid-metal system of In:Ga eutectic. The light isotope of Mo is found to suffer a 53 ± 5‰ (note: 1.0‰ ≡ 0.1%) enrichment in the sputtered flux in the near-normal direction, compared to the steady-state near-normal sputtered composition, under 5.0 keV Xe⁺ bombardment at 3.0×10¹⁴ ions/cm². In the liquid-metal study only the angular dependence of the fractionation could be measured, due to the lack of a well-defined reference and the nature of the liquid surface, which is able to 'repair' itself during the course of a bombardment. The results show that ¹¹³In is preferentially sputtered over ¹¹⁵In in the near-normal direction by about 8.7 ± 2.7‰ compared to the oblique direction. ⁶⁹Ga, on the other hand, is sputtered preferentially over ⁷¹Ga in the oblique direction by about 13 ± 4.4‰ with respect to the near-normal direction.
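The per-mil values quoted above follow the standard delta convention for isotope-ratio comparisons. A small helper showing the bookkeeping; the 53‰ figure is from the text, while the raw ratio below is back-constructed to match it:

```python
# Per-mil fractionation bookkeeping as used in the text (1.0 per mil = 0.1%):
# delta = (R_sample / R_reference - 1) * 1000, where R is an isotope ratio
# (e.g. light/heavy, such as 92Mo/100Mo).
def per_mil(r_sample, r_reference):
    return (r_sample / r_reference - 1.0) * 1000.0

# A sputtered-flux ratio 5.3% above the steady-state reference ratio
# corresponds to an enrichment of 53 per mil:
delta = per_mil(1.053, 1.000)
print(f"delta = {delta:.1f} per mil")
```

Expressing results as deltas against the steady-state composition removes the absolute-calibration problem, which is why the liquid-metal measurements, lacking a well-defined reference, could only yield angular differences.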

Abstract:

The influence of (i) freestream turbulence level and (ii) the injection of small amounts of a drag-reducing polymer (Polyox WSR 301) into the test-model boundary layer upon the basic viscous flow about two axisymmetric bodies was investigated by the schlieren flow-visualization technique. The changes in the type and occurrence of cavitation inception caused by the resulting modifications in the viscous flow were studied. A nuclei counter using the holographic technique was built to monitor freestream nuclei populations, and a few preliminary tests investigating the consequences of different populations on cavitation inception were carried out.

Both test models were observed to have a laminar separation over their respective test Reynolds number ranges. The separation on one test model was found to be insensitive to freestream turbulence levels of up to 3.75 percent. The second model was found to be very susceptible, its critical velocity being reduced from 30 feet per second at a 0.04 percent turbulence level to 10 feet per second at a 3.75 percent turbulence level. Cavitation tests on both models at the lowest turbulence level showed that the value of the incipient cavitation number and the type of cavitation were controlled by the presence of the laminar separation. Cavitation tests on the second model at a 0.65 percent turbulence level showed no change in the inception index, but the appearance of the developed cavitation was altered.
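The inception index discussed above is the standard cavitation number, sigma = (p_inf - p_v) / (1/2 rho V^2); inception occurs when sigma falls to the incipient value. A short sketch of the definition, using hypothetical water-tunnel conditions rather than data from this study:

```python
def cavitation_number(p_inf, p_vapor, rho, velocity):
    """Standard cavitation number:
    sigma = (p_inf - p_v) / (0.5 * rho * V**2).
    Cavitation inception occurs as sigma drops to the incipient value."""
    return (p_inf - p_vapor) / (0.5 * rho * velocity**2)

# Hypothetical condition (illustration only, not data from this study):
# 1 atm static pressure, 20 C water, 30 ft/s ~ 9.14 m/s.
sigma = cavitation_number(p_inf=101325.0, p_vapor=2339.0, rho=998.0, velocity=9.14)
print(round(sigma, 2))  # roughly 2.4 for these inputs
```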

The presence of Polyox in the boundary layer resulted in a cavitation suppression comparable to that found by other investigators. The elimination of the normally occurring laminar separation on these bodies by a polymer-induced instability in the laminar boundary layer was found to be responsible for the suppression of inception.

Freestream nuclei populations at test conditions were measured, and it was found that if there were many freestream gas bubbles the normally present laminar separation was eliminated and travelling-bubble-type cavitation occurred; the value of the inception index then depended upon the nuclei population. In cases where the laminar separation was present, the value of the inception index was found to be insensitive to the freestream nuclei populations.

Abstract:

This investigation demonstrates an application of a flexible-wall nozzle for testing in a supersonic wind tunnel. It is conservative to say that the versatility of this nozzle warrants the expenditure of time to carefully engineer such a nozzle and incorporate it in the wind tunnel as a permanent part of the system. The gradients in the test section were kept within one percent of the calibrated Mach number; over the bodies tested, however, the gradients were only ±0.2 percent in Mach number.

The conditions existing on a finite cone with a vertex angle of 75° were investigated by considering the pressure distribution on the cone and the shape of the shock wave. The pressure distribution on the surface of the 75° cone when based on upstream conditions does not show any discontinuities at the theoretical attachment Mach number.

Both the angle of the shock wave and the pressure distribution on the 75° cone are in very close agreement with the theoretical values given in the Kopal report (Ref. 3).

The locations of the intersections of the sonic line with the surface of the cone and with the shock wave are given for the cone. The blocking characteristics of the GALCIT supersonic wind tunnel were investigated with a series of 60° cones.

Abstract:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high-energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for the extreme numerical precision required by previous methods. In addition, the local approximation makes this method well suited to self-consistent calculations. We then discuss the application of relativistic mean-field methods to the atomic nucleus. Self-consistent, one-loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.

Abstract:

The O18/O16 ratios of coexisting minerals from a number of regionally metamorphosed rocks have been measured using a bromine pentafluoride extraction technique. Listed in order of their increasing tendency to concentrate O18, the minerals analyzed are magnetite, ilmenite, chlorite, biotite, garnet, hornblende, kyanite, muscovite, feldspar, and quartz. The only anomalous sequence detected occurs in a xenolith of schist, in which quartz, muscovite, biotite, and ilmenite, but not garnet, have undergone isotopic exchange with the surrounding trondhjemite.

With few exceptions, quartz-magnetite and quartz-ilmenite fractionations decrease with increasing metamorphic grade, as determined by mineral paragenesis and spatial distribution. This consistency does not extend to quartz-magnetite and quartz-ilmenite fractionations obtained from rocks in which petrographic evidence of retrogradation is present.

Whereas measured isotopic fractionations among quartz, garnet, ilmenite, and magnetite are approximately related to metamorphic grade, fractionations between these minerals and biotite or muscovite show poor correlation with grade. Variations in muscovite-biotite fractionations are relatively small. These observations are interpreted to mean that muscovite and biotite are affected by retrograde re-equilibration to a greater extent than the anhydrous minerals analyzed.

Measured quartz-ilmenite fractionations range from 12 permil in the biotite zone of central Vermont to 6.5 permil in the sillimanite-orthoclase zone of southeastern Connecticut. Analyses of natural assemblages from the kyanite and sillimanite zones suggest that equilibrium quartz-ilmenite fractionations are approximately 8 percent smaller than the corresponding quartz-magnetite fractionations. Employing the quartz-magnetite geothermometer calibrated by O'Neil and Clayton (1964), a temperature of 560°C was obtained for kyanite-bearing schists from Addison County, Vermont. Extending the calibration to quartz-ilmenite fractionations, a temperature of 600°C was obtained for kyanite schists from Shoshone County, Idaho. At these temperatures kyanite is stable only at pressures exceeding 11 kbar (Bell, 1963), corresponding to depths of over 40 km.
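The geothermometry in the last paragraph inverts an experimentally calibrated fractionation curve to recover temperature. As a hedged sketch of that step, the code below assumes a calibration of the common high-temperature form 1000 ln(alpha) ≈ Delta = A × 10^6 / T^2 (T in kelvin); the coefficient A = 4.8 and the input fractionation are placeholders chosen for illustration, not the actual O'Neil and Clayton (1964) values.

```python
import math

def isotope_temperature_celsius(delta_permil, a_coeff):
    """Invert a fractionation calibration of the assumed form
    1000*ln(alpha) ~ Delta = A * 10**6 / T**2  (T in kelvin)
    to recover temperature from a measured fractionation (permil)."""
    t_kelvin = math.sqrt(a_coeff * 1.0e6 / delta_permil)
    return t_kelvin - 273.15

# A = 4.8 and Delta = 6.9 permil are placeholder values for
# illustration only; use a published calibration in practice.
print(round(isotope_temperature_celsius(delta_permil=6.9, a_coeff=4.8), 1))
```

Note the inverse-square dependence: smaller fractionations between a mineral pair imply higher equilibration temperatures, which is why the fractionations above decrease with increasing metamorphic grade.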