17 results for one percent
in CaltechTHESIS
Abstract:
Experimental work was performed to delineate the system of digested sludge particles and associated trace metals and also to measure the interactions of sludge with seawater. Particle-size and particle number distributions were measured with a Coulter Counter. Number counts in excess of 10^12 particles per liter were found in both the City of Los Angeles Hyperion mesophilic digested sludge and the Los Angeles County Sanitation Districts (LACSD) digested primary sludge. More than 90 percent of the particles had diameters less than 10 microns.
Total and dissolved trace metals (Ag, Cd, Cr, Cu, Fe, Mn, Ni, Pb, and Zn) were measured in LACSD sludge. Manganese was the only metal whose dissolved fraction exceeded one percent of the total metal. Sedimentation experiments for several dilutions of LACSD sludge in seawater showed that the sedimentation velocities of the sludge particles decreased as the dilution factor increased. A tenfold increase in dilution shifted the sedimentation velocity distribution by an order of magnitude. Chromium, Cu, Fe, Ni, Pb, and Zn were also followed during sedimentation. To a first approximation these metals behaved like the particles.
Solids and selected trace metals (Cr, Cu, Fe, Ni, Pb, and Zn) were monitored in oxic mixtures of both Hyperion and LACSD sludges for periods of 10 to 28 days. Less than 10 percent of the filterable solids dissolved or were oxidized. Only Ni was mobilized away from the particles. The majority of the mobilization was complete in less than one day.
The experimental data of this work were combined with oceanographic, biological, and geochemical information to propose and model the discharge of digested sludge to the San Pedro and Santa Monica Basins. A hydraulic computer simulation for a round buoyant jet in a density stratified medium showed that discharges of sludge effluent mixture at depths of 730 m would rise no more than 120 m. Initial jet mixing provided dilution estimates of 450 to 2600. Sedimentation analyses indicated that the solids would reach the sediments within 10 km of the discharge point.
Mass balances on the oxidizable chemical constituents in sludge indicated that the nearly anoxic waters of the basins would become wholly anoxic as a result of proposed discharges. From chemical-equilibrium computer modeling of the sludge digester and dilutions of sludge in anoxic seawater, it was predicted that the chemistry of all trace metals except Cr and Mn will be controlled by the precipitation of metal sulfide solids. This metal speciation held for dilutions up to 3000.
The net environmental impacts of this scheme should be salutary. The trace metals in the sludge should be immobilized in the anaerobic bottom sediments of the basins. Apparently no lifeforms higher than bacteria are there to be disrupted. The proposed deep-water discharges would remove the need for potentially expensive and energy-intensive land disposal alternatives and would end the discharge to the highly productive water near the ocean surface.
Abstract:
This investigation demonstrates an application of a flexible wall nozzle for testing in a supersonic wind tunnel. It is conservative to say that the versatility of this nozzle is such that it warrants the expenditure of time to carefully engineer a nozzle and incorporate it in the wind tunnel as a permanent part of the system. The gradients in the test section were kept within one percent of the calibrated Mach number; however, the gradients occurring over the bodies tested were only ±0.2 percent in Mach number.
The conditions existing on a finite cone with a vertex angle of 75° were investigated by considering the pressure distribution on the cone and the shape of the shock wave. The pressure distribution on the surface of the 75° cone, when based on upstream conditions, does not show any discontinuities at the theoretical attachment Mach number.
Both the angle of the shock wave and the pressure distribution of the 75° cone are in very close agreement with the theoretical values given in the Kopal report (Ref. 3).
The locations of the intersections of the sonic line with the surface of the cone and with the shock wave are given for the cone. The blocking characteristics of the GALCIT supersonic wind tunnel were investigated with a series of 60° cones.
Abstract:
The first synthesis of the cembranoid natural product (±)-7,8-epoxy-4-basmen-6-one (1) is described. Key steps of the synthetic route include the cationic cyclization of the acid chloride from 15 to provide the macrocycle 16, and the photochemical transannular radical cyclization of the ester 41 to form the tricyclic product 50. Product 50 was transformed into 1 in ten steps. Transition-state molecular modeling studies were found to provide accurate predictions of the structural and stereochemical outcomes of cyclization reactions explored experimentally in the development of the synthetic route to 1. These investigations should prove valuable in the development of transannular cyclization as a strategy for synthetic simplification.
Abstract:
In the preparation of small organic paramagnets, structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3-cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane) using Berson TMMs and cyclobutanediyls as SCs are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.
The poor coupling of 1,3-cyclopentane has enabled a study of the variable temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. Through fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From these data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.
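For context, fits of this kind typically assume Boltzmann populations weighted by spin degeneracy (2S+1 = 5 for the quintet, 3 for the triplet). As a generic two-level sketch, not necessarily the exact fitting function used here: with the triplet lying ΔE above the quintet ground state, the triplet fraction is

p_T(T) = 3 exp(-ΔE/k_B T) / [5 + 3 exp(-ΔE/k_B T)],

so the temperature dependence of the triplet EPR intensity determines ΔE.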
The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.
Source listings for all computer programs are given in the appendix.
Abstract:
This thesis consists of three separate studies of roles that black holes might play in our universe.
In the first part we formulate a statistical method for inferring the cosmological parameters of our universe from LIGO/VIRGO measurements of the gravitational waves produced by coalescing black-hole/neutron-star binaries. This method is based on the cosmological distance-redshift relation, with "luminosity distances" determined directly, and redshifts indirectly, from the gravitational waveforms. Using the current estimates of binary coalescence rates and projected "advanced" LIGO noise spectra, we conclude that by our method the Hubble constant should be measurable to within an error of a few percent. The errors for the mean density of the universe and the cosmological constant will depend strongly on the size of the universe, varying from about 10% for a "small" universe up to and beyond 100% for a "large" universe. We further study the effects of random gravitational lensing and find that it may strongly impair the determination of the cosmological constant.
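As a sketch of the relation this method rests on (written here for a spatially flat model; not necessarily the exact parameterization used in the thesis), the luminosity distance is

d_L(z) = (1+z) (c/H_0) ∫_0^z dz' / sqrt[Ω_m (1+z')^3 + Ω_Λ],

which reduces to d_L ≈ cz/H_0 at small redshift. A waveform-derived d_L paired with an inferred z therefore constrains H_0 directly, while Ω_m and Ω_Λ enter only at higher redshift, consistent with the error scalings quoted above.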
In the second part of this thesis we disprove a conjecture that black holes cannot form in an early, inflationary era of our universe, because of a quantum-field-theory induced instability of the black-hole horizon. This instability was supposed to arise from the difference in temperatures of any black-hole horizon and the inflationary cosmological horizon; it was thought that this temperature difference would make every quantum state that is regular at the cosmological horizon be singular at the black-hole horizon. We disprove this conjecture by explicitly constructing a quantum vacuum state that is everywhere regular for a massless scalar field. We further show that this quantum state has all the nice thermal properties that one has come to expect of "good" vacuum states, both at the black-hole horizon and at the cosmological horizon.
In the third part of the thesis we study the evolution and implications of a hypothetical primordial black hole that might have found its way into the center of the Sun or any other solar-type star. As a foundation for our analysis, we generalize the mixing-length theory of convection to an optically thick, spherically symmetric accretion flow (and find in passing that the radial stretching of the inflowing fluid elements leads to a modification of the standard Schwarzschild criterion for convection). When the accretion is that of solar matter onto the primordial hole, the rotation of the Sun causes centrifugal hangup of the inflow near the hole, resulting in an "accretion torus" which produces an enhanced outflow of heat. We find, however, that the turbulent viscosity, which accompanies the convective transport of this heat, extracts angular momentum from the inflowing gas, thereby buffering the torus into a lower luminosity than one might have expected. As a result, the solar surface will not be influenced noticeably by the torus's luminosity until at most three days before the Sun is finally devoured by the black hole. As a simple consequence, accretion onto a black hole inside the Sun cannot be an answer to the solar neutrino puzzle.
Abstract:
The Lake Elsinore quadrangle covers about 250 square miles and includes parts of the southwest margin of the Perris Block, the Elsinore trough, the southeastern end of the Santa Ana Mountains, and the Elsinore Mountains.
The oldest rocks consist of an assemblage of metamorphics of igneous effusive and sedimentary origin, probably, for the most part, of Triassic age. They are intruded by diorite and various hypabyssal rocks, then in turn by granitic rocks, which occupy over 40 percent of the area. Following this last igneous activity of probable Lower Cretaceous age, an extended period of sedimentation started with the deposition of the marine Upper Cretaceous Chico formation and continued during the Paleocene under alternating marine and continental conditions on the margins of the blocks. A marine regression towards the north, during the Neocene, accounts for the younger Tertiary strata in the region under consideration.
Outpouring of basalts to the southeast indicates that igneous activity was resumed toward the close of the Tertiary. The fault zone, which characterizes the Elsinore trough, marks one of the major tectonic lines of southern California. It separates the upthrown and tilted block of the Santa Ana Mountains to the south from the Perris Block to the north.
Most of the faults are normal in type and nearly parallel to the general trend of the trough, or intersect each other at an acute angle. Vertical displacements generally exceed the horizontal ones and several periods of activity are recognized.
Tilting of Tertiary and older Quaternary sediments in the trough has produced broad synclinal structures which have been modified by subsequent faulting.
Five old surfaces of erosion are exposed on the highlands.
The mineral resources of the region are mainly high-grade clay deposits and mineral waters.
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to an estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.
Chapter Two describes the datasets that were used for the analyses. The primary database utilized was collected by me from an outpatient HIV clinic. The data included dates from 1984 until the present. The second database was from the Multicenter AIDS Cohort Study (MACS) public dataset. The data from the MACS cover the time between 1984 and October 1992. Comparisons are made between both datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era, characterized by a new class of drugs and new technology, changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents, were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.
Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in 90 percent of the population in controlling viral replication. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime cost for treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
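A back-of-the-envelope version of this arithmetic can be sketched as follows; every input below is a hypothetical placeholder, not a figure from the thesis, which estimates its costs from clinical data.

```python
# Hypothetical cost arithmetic, for illustration only; no inputs here are
# from the thesis. Lifetime cost = annual cost x years on therapy, and
# cost per life-year = incremental cost / life-years gained.
annual_haart_cost = 15_000.0                  # $/year, assumed
years_low, years_high = 23.5, 39.9            # years on HAART, assumed

print(f"lifetime: ${annual_haart_cost * years_low:,.0f} "
      f"to ${annual_haart_cost * years_high:,.0f}")

incremental_cost = 404_000.0                  # $ vs. non-HAART care, assumed
life_years_gained = 4.0                       # assumed
print(f"per life-year: ${incremental_cost / life_years_gained:,.0f}")
```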
Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
For some time now, the Latino voice has been gradually gaining strength in American politics, particularly in such states as California, Florida, Illinois, New York, and Texas, where large numbers of Latino immigrants have settled and large numbers of electoral votes are at stake. Yet the issues public officials in these states espouse and the laws they enact often do not coincide with the interests and preferences of Latinos. The fact that Latinos in California and elsewhere have not been able to influence the political agenda in a way that is commensurate with their numbers may reflect their failure to participate fully in the political process by first registering to vote and then consistently turning out on election day to cast their ballots.
To understand Latino voting behavior, I first examine Latino political participation in California during the ten general elections of the 1980s and 1990s, seeking to understand what percentage of the eligible Latino population registers to vote, with what political party they register, how many registered Latinos go to the polls on election day, and what factors might increase their participation in politics. To ensure that my findings are not unique to California, I also consider Latino voter registration and turnout in Texas for the five general elections of the 1990s and compare these results with my California findings.
I offer a new approach to studying Latino political participation in which I rely on county-level aggregate data, rather than on individual survey data, and employ the ecological inference method of generalized bounds. I calculate and compare Latino and white voting-age populations, registration rates, turnout rates, and party affiliation rates for California's fifty-eight counties. Then, in a secondary grouped logit analysis, I consider the factors that influence these Latino and white registration, turnout, and party affiliation rates.
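As a sketch of the grouped logit stage (the counties, counts, and covariates below are hypothetical, and the ecological-inference bounds step is not shown), county-level aggregates can be fit as a binomial GLM:

```python
# Grouped logit on hypothetical county-level aggregates; a minimal sketch,
# not the thesis's actual dataset or specification.
import numpy as np
import statsmodels.api as sm

registered = np.array([12_000, 4_500, 30_000, 8_200])   # Latino registrants
vap        = np.array([40_000, 9_000, 75_000, 21_000])  # Latino voting-age pop.
educ       = np.array([11.8, 12.5, 13.1, 12.0])         # mean years of schooling
income     = np.array([31.0, 38.5, 42.2, 29.5])         # median income, $1000s

X = sm.add_constant(np.column_stack([educ, income]))
# Passing (successes, failures) as the response gives the grouped logit.
y = np.column_stack([registered, vap - registered])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())
```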
I find that California Latinos register and turn out at substantially lower rates than do whites and that these rates are more volatile than those of whites. I find that Latino registration is motivated predominantly by age and education, with older and more educated Latinos being more likely to register. Motor voter legislation, which was passed to ease and simplify the registration process, has not encouraged Latino registration. I find that turnout among California's Latino voters is influenced primarily by issues, income, educational attainment, and the size of the Spanish-speaking communities in which they reside. Although language skills may be an obstacle to political participation for an individual, the number of Spanish-speaking households in a community does not encourage or discourage registration but may encourage turnout, suggesting that cultural and linguistic assimilation may not be the entire answer.
With regard to party identification, I find that Democrats can expect a steady Latino political identification rate between 50 and 60 percent, while Republicans attract 20 to 30 percent of Latino registrants. I find that education and income are the dominant factors in determining Latino political party identification, which appears to be no more volatile than that of the larger electorate.
Next, when I consider registration and turnout in Texas, I find that Latino registration rates are nearly equal to those of whites but that Texas Latino turnout rates are volatile and substantially lower than those of whites.
Low turnout rates among Latinos and the volatility of these rates may explain why Latinos in California and Texas have had little influence on the political agenda even though their numbers are large and increasing. Simply put, the voices of Latinos are little heard in the halls of government because they do not turn out consistently to cast their votes on election day.
While these findings suggest that there may not be any short-term or quick fixes to Latino participation, they also suggest that Latinos should be encouraged to participate more fully in the political process and that additional education may be one means of achieving this goal. Candidates should speak more directly to the issues that concern Latinos. Political parties should view Latinos as crossover voters rather than as potential converts. In other words, if Latinos were "a sleeping giant," they may now be a still-drowsy leviathan waiting to be wooed by either party's persuasive political messages and relevant issues.
Abstract:
Part I: An approach to the total synthesis of the triterpene shionone is described, which proceeds through the tetracyclic ketone i. The shionone side chain has been attached to this key intermediate in 5 steps, affording the olefin 2 in 29% yield. A method for the stereo-specific introduction of the angular methyl group at C-5 of shionone has been developed on a model system. The attempted utilization of this method to convert olefin 2 into shionone is described.
Part II: A method has been developed for activating the C-9 and C-10 positions of estrogenic steroids for substitution. Estrone has been converted to 4β,5β-epoxy-10β-hydroxyestr-3-one; cleavage of this epoxyketone using an Eschenmoser procedure, and subsequent modification of the product afforded 4-seco-9-estren-3,5-dione 3-ethylene acetal. This versatile intermediate, suitable for substitution at the 9 and/or 10 position, was converted to androst-4-ene-3-one by known procedures.
Abstract:
The velocity of selectively-introduced edge dislocations in 99.999 percent pure copper crystals has been measured as a function of stress at temperatures from 66°K to 373°K by means of a torsion technique. The range of resolved shear stress was 0 to 15 megadynes/cm^2 for seven temperatures (66°K, 74°K, 83°K, 123°K, 173°K, 296°K, and 373°K).
Dislocation mobility is characterized by two distinct features: (a) relatively high velocity at low stress (maximum velocities of about 9000 cm/sec were realized at low temperatures), and (b) increasing velocity with decreasing temperature at constant stress.
The relation between dislocation velocity and resolved shear stress is:
v = v_o(τ_r/τ_o)^n
where v is the dislocation velocity at resolved shear stress τ_r, v_o is a constant velocity chosen equal to 2000 cm/sec, τ_o is the resolved shear stress required to maintain velocity v_o, and n is the mobility coefficient. The experimental results indicate that τ_o decreases from 16.3 x 10^6 to 3.3 x 10^6 dynes/cm^2 and n increases from about 0.9 to 1.1 as the temperature is lowered from 296°K to 66°K.
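A minimal numerical reading of this mobility law, plugging in the quoted end-point fits (the intermediate temperatures are omitted):

```python
# Evaluate v = v_o * (tau_r / tau_o)**n with the fitted parameters quoted
# above; tau values in dynes/cm^2, velocities in cm/sec.
v_o = 2000.0                         # reference velocity chosen in the thesis

fits = {                             # temperature: (tau_o, n)
    "296K": (16.3e6, 0.9),
    "66K":  (3.3e6, 1.1),
}

tau_r = 10.0e6                       # 10 megadynes/cm^2, within the test range
for T, (tau_o, n) in fits.items():
    v = v_o * (tau_r / tau_o) ** n
    print(f"{T}: v = {v:,.0f} cm/sec")   # faster at 66K than 296K, as reported
```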
The experimental dislocation behavior is consistent with an interpretation on the basis of phonon drag. However, the complete temperature dependence of dislocation mobility could not be closely approximated by the predictions of one or a combination of mechanisms.
Abstract:
The influence upon the basic viscous flow about two axisymmetric bodies of (i) freestream turbulence level and (ii) the injection of small amounts of a drag-reducing polymer (Polyox WSR 301) into the test model boundary layer was investigated by the schlieren flow visualization technique. The changes in the type and occurrence of cavitation inception caused by the subsequent modifications in the viscous flow were studied. A nuclei counter using the holographic technique was built to monitor freestream nuclei populations and a few preliminary tests investigating the consequences of different populations on cavitation inception were carried out.
Both test models were observed to have a laminar separation over their respective test Reynolds number ranges. The separation on one test model was found to be insensitive to freestream turbulence levels of up to 3.75 percent. The second model was found to be very susceptible, having its critical velocity reduced from 30 feet per second at a 0.04 percent turbulence level to 10 feet per second at a 3.75 percent turbulence level. Cavitation tests on both models at the lowest turbulence level showed the value of the incipient cavitation number and the type of cavitation were controlled by the presence of the laminar separation. Cavitation tests on the second model at 0.65 percent turbulence level showed no change in the inception index, but the appearance of the developed cavitation was altered.
The presence of Polyox in the boundary layer resulted in a cavitation suppression comparable to that found by other investigators. The elimination of the normally occurring laminar separation on these bodies by a polymer-induced instability in the laminar boundary layer was found to be responsible for the suppression of inception.
Freestream nuclei populations at test conditions were measured and it was found that if there were many freestream gas bubbles the normally present laminar separation was eliminated and travelling bubble type cavitation occurred; the value of the inception index then depended upon the nuclei population. In cases where the laminar separation was present it was found that the value of the inception index was insensitive to the freestream nuclei populations.
Abstract:
We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.
Abstract:
Politically the Colorado River is an interstate as well as an international stream. Physically the basin divides itself distinctly into three sections. The upper section, from the headwaters to the mouth of the San Juan, comprises about 40 percent of the total area of the basin and affords about 87 percent of the total runoff, or an average of about 15 000 000 acre feet per annum. High mountains and cold weather are found in this section. The middle section, from the mouth of the San Juan to the mouth of the Williams, comprises about 35 percent of the total area of the basin and supplies about 7 percent of the annual runoff. Narrow canyons and mild weather prevail in this section. The lower third of the basin is composed mainly of hot arid plains of low altitude. It comprises some 25 percent of the total area of the basin and furnishes about 6 percent of the average annual runoff.
The proposed Diamond Creek reservoir is located in the middle section and is wholly within the boundary of Arizona. The site is at the mouth of Diamond Creek and is only 16 miles from Beach Spring, a station on the Santa Fe railroad. It is solely a power project with a limited storage capacity. The dam which creates the reservoir is of the gravity type, to be constructed across the river. The walls and foundation are of granite. For a dam of 290 feet in height, the backwater will be about 25 miles up the river.
The power house will be placed right below the dam, perpendicular to the axis of the river. It is entirely a concrete structure. The power installation would consist of eighteen 37 500 H.P. vertical, variable-head turbines, directly connected to 28 000 kva., 110 000 v., 3-phase, 60-cycle generators with necessary switching and auxiliary apparatus. Each unit is to be fed by a separate penstock wholly embedded in the masonry.
Concerning the power market, the main electric transmission lines would extend to Prescott, Phoenix, Mesa, Florence, etc. The mining regions of the mountains of Arizona would be the most adequate market. The demand for power in the above named places might not be large at present. It will, from the observation of the writer, rapidly increase with the wonderful advancement of all kinds of industrial development.
All these things being comparatively feasible, there is one difficult problem: that is the silt. At the Diamond Creek dam site the average annual silt discharge is about 82 650 acre feet. The geographical conditions, however, will not permit silt deposits right in the reservoir. So this design will be made under the assumption given in Section 4.
The silt condition and the change of lower course of the Colorado are much like those of the Yellow River in China. But one thing is different. On the Colorado most of the canyon walls are of granite, while those on the Yellow are of alluvial loess: so it is very hard, if not impossible, to get a favorable dam site on the lower part. As a visitor to this country, I should like to see the full development of the Colorado: but how about THE YELLOW!
Abstract:
Experiments have been accomplished that (a) further define the nature of the strong, G-containing DNA binding sites for actinomycin D (AMD), and (b) quantitate the in vitro inhibition of E. coli RNA polymerase activity by T7 DNA-bound AMD.
Twenty-five to forty percent of the G's of crab dAT are disallowed as strong AMD binding sites. The G's are measured to be randomly distributed, and, therefore, this datum cannot be explained on the basis of steric interference alone. Poly dAC:TG binds as much AMD and as strongly as any natural DNA, so the hypothesis that the unique strong AMD binding sites are G and a neighboring purine is incorrect. The datum can be explained on the basis of both steric interference and the fact that TGA is a disallowed sequence for strong AMD binding.
Using carefully defined in vitro conditions, there is one RNA synthesized per T7 DNA by E. coli RNA polymerase. The rate of the RNA polymerase-catalyzed reaction conforms to the equation

1/rate = 1/(k_A[ATP]) + 1/(k_G[GTP]) + 1/(k_C[CTP]) + 1/(k_U[UTP]).

T7 DNA-bound AMD has only modest effects on initiation and termination of the polymerase-catalyzed reaction, but a large inhibitory effect on propagation. In the presence of bound AMD, k_G and k_C are decreased, whereas k_A and k_U are unaffected. These facts are interpreted to mean that on the microscopic level, on the average, the rates of incorporation of ATP and UTP are the same in the absence or presence of bound AMD, but that the rates of incorporation of GTP and CTP are decreased in the presence of AMD.
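A small numerical sketch of this reciprocal-rate law (all rate constants and NTP concentrations below are hypothetical, chosen only to show the propagation effect):

```python
# Reciprocal-rate law from the abstract: 1/rate = sum over the four NTPs
# of 1/(k_X [XTP]). All values are hypothetical, for illustration only.
def rate(k, ntp):
    return 1.0 / sum(1.0 / (k[x] * ntp[x]) for x in "AGCU")

ntp = {x: 0.2 for x in "AGCU"}            # mM, hypothetical
k_free = {"A": 1.0, "G": 1.0, "C": 1.0, "U": 1.0}
k_amd  = {"A": 1.0, "G": 0.3, "C": 0.3, "U": 1.0}  # bound AMD lowers k_G, k_C

print(rate(k_free, ntp))   # faster propagation without AMD
print(rate(k_amd, ntp))    # slower: only the G and C terms changed
```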
Abstract:
Part I
Solutions of Schrödinger’s equation for a system of two particles bound in various stationary one-dimensional potential wells and repelling each other with a Coulomb force are obtained by the method of finite differences. The general properties of such systems are worked out in detail for the case of two electrons in an infinite square well. For small well widths (1-10 a.u.) the energy levels lie above those of the noninteracting particle model by as much as a factor of 4, although excitation energies are only half again as great. The analytical form of the solutions is obtained and it is shown that every eigenstate is doubly degenerate due to the “pathological” nature of the one-dimensional Coulomb potential. This degeneracy is verified numerically by the finite-difference method. The properties of the square-well system are compared with those of the free-electron and hard-sphere models; perturbation and variational treatments are also carried out using the hard-sphere Hamiltonian as a zeroth-order approximation. The lowest several finite-difference eigenvalues converge from below with decreasing mesh size to energies below those of the “best” linear variational function consisting of hard-sphere eigenfunctions. The finite-difference solutions in general yield expectation values and matrix elements as accurate as those obtained using the “best” variational function.
The system of two electrons in a parabolic well is also treated by finite differences. In this system it is possible to separate the center-of-mass motion and hence to effect a considerable numerical simplification. It is shown that the pathological one-dimensional Coulomb potential gives rise to doubly degenerate eigenstates for the parabolic well in exactly the same manner as for the infinite square well.
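A minimal sketch of the finite-difference method in its simplest setting, one particle in a 1-D infinite square well in atomic units (the thesis's two-electron problems put the same discretization on a two-dimensional grid with a Coulomb term added):

```python
# Finite-difference eigenvalues for one particle in a 1-D infinite square
# well (atomic units); a minimal sketch of the method, not the thesis code.
import numpy as np
from scipy.linalg import eigh_tridiagonal

L = 1.0                      # well width, a.u.
N = 2000                     # interior grid points; psi = 0 at the walls
h = L / (N + 1)

# H = -(1/2) d^2/dx^2 discretized with a three-point stencil.
diag = np.full(N, 1.0 / h**2)
off = np.full(N - 1, -0.5 / h**2)
E, _ = eigh_tridiagonal(diag, off, select="i", select_range=(0, 2))

print(E)                                         # lowest three eigenvalues
print([n**2 * np.pi**2 / 2 for n in (1, 2, 3)])  # exact: n^2 pi^2 / 2
```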
Part II
A general method of treating inelastic collisions quantum mechanically is developed and applied to several one-dimensional models. The formalism is first developed for nonreactive “vibrational” excitations of a bound system by an incident free particle. It is then extended to treat simple exchange reactions of the form A + BC → AB + C. The method consists essentially of finding a set of linearly independent solutions of the Schrödinger equation such that each solution of the set satisfies a distinct, yet arbitrary boundary condition specified in the asymptotic region. These linearly independent solutions are then combined to form a total scattering wavefunction having the correct asymptotic form. The method of finite differences is used to determine the linearly independent functions.
The theory is applied to the impulsive collision of a free particle with a particle bound in (1) an infinite square well and (2) a parabolic well. Calculated transition probabilities agree well with previously obtained values.
Several models for the exchange reaction involving three identical particles are also treated: (1) infinite-square-well potential surface, in which all three particles interact as hard spheres and each two-particle subsystem (i.e. BC and AB) is bound by an attractive infinite-square-well potential; (2) truncated parabolic potential surface, in which the two-particle subsystems are bound by a harmonic oscillator potential which becomes infinite for interparticle separations greater than a certain value; (3) parabolic (untruncated) surface. Although there are no published values with which to compare our reaction probabilities, several independent checks on internal consistency indicate that the results are reliable.