7 results for ddc:670

in CaltechTHESIS


Relevance: 10.00%

Publisher:

Abstract:

Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).

(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.

(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter with the important lengthscales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating, and (d) narrower phase loops.
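For concreteness, such deflection criteria are commonly cast in terms of a dimensionless phase buoyancy parameter, often written P = γΔρ/(αρ²gd) (the Christensen-Yuen form). The sketch below uses representative values for the 670 km endothermic transition; the numbers are assumptions for illustration, not values from the thesis:

```python
def phase_buoyancy(gamma, delta_rho, rho, alpha, g, d):
    """Dimensionless phase buoyancy parameter, P = gamma*drho/(alpha*rho^2*g*d).

    gamma: Clapeyron slope (Pa/K, negative for an endothermic transition)
    delta_rho: density jump across the transition (kg/m^3)
    rho: mean mantle density (kg/m^3); alpha: thermal expansivity (1/K)
    g: gravity (m/s^2); d: layer depth (m)
    """
    return (gamma * delta_rho) / (alpha * rho**2 * g * d)

# Representative (assumed) values for the 670 km discontinuity:
P = phase_buoyancy(gamma=-2.8e6,     # Pa/K, endothermic => negative P
                   delta_rho=400.0,  # kg/m^3
                   rho=4000.0,       # kg/m^3
                   alpha=2.5e-5,     # 1/K
                   g=10.0,           # m/s^2
                   d=2.89e6)         # m, whole-mantle depth
print(round(P, 3))  # more negative P => stronger inhibition of flow
```

A more negative P corresponds to stronger layering; the scaling results quoted above amount to saying that the critical |P| for deflection decreases with shorter wavelengths, narrower up/downwellings, internal heating, and narrower phase loops.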

(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.

The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.

Relevance: 10.00%

Publisher:

Abstract:

In this thesis I apply paleomagnetic techniques to paleoseismological problems. I investigate the use of secular-variation magnetostratigraphy to date prehistoric earthquakes; I identify liquefaction remanent magnetization (LRM); and I quantify coseismic deformation within a fault zone by measuring the rotation of paleomagnetic vectors.

In Chapter 2 I construct a secular-variation reference curve for southern California. For this curve I measure three new well-constrained paleomagnetic directions: two from the Pallett Creek paleoseismological site at A.D. 1397-1480 and A.D. 1465-1495, and one from Panum Crater at A.D. 1325-1365. To these three directions I add the best nine data points from the Sternberg secular-variation curve, five data points from Champion, and one point from the A.D. 1480 eruption of Mt. St. Helens. I derive the error due to the non-dipole field that is added to these data by the geographical correction to southern California. Combining these yields a secular variation curve for southern California covering the period A.D. 670 to 1910, with the best coverage in the range A.D. 1064 to 1505.

In Chapter 3 I apply this curve to a problem in southern California. Two paleoseismological sites in the Salton trough of southern California have sediments deposited by prehistoric Lake Cahuilla. At the Salt Creek site I sampled sediments from three different lakes, and at the Indio site I sampled sediments from four different lakes. Based upon the coinciding paleomagnetic directions I correlate the oldest lake sampled at Salt Creek with the oldest lake sampled at Indio. Furthermore, the penultimate lake at Indio does not appear to be present at Salt Creek. Using the secular variation curve I can assign the lakes at Salt Creek to broad age ranges of A.D. 800 to 1100, A.D. 1100 to 1300, and A.D. 1300 to 1500. This example demonstrates the large uncertainties in the secular variation curve and the need to construct curves from a limited geographical area.
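Correlating lake beds by "coinciding paleomagnetic directions" amounts to comparing declination/inclination pairs on a sphere. A minimal sketch of the standard great-circle comparison follows; the two directions are hypothetical, not values from the thesis:

```python
import math

def angular_distance(dec1, inc1, dec2, inc2):
    """Great-circle angle (degrees) between two paleomagnetic directions,
    each given as (declination, inclination) in degrees."""
    d1, i1, d2, i2 = map(math.radians, (dec1, inc1, dec2, inc2))
    cos_t = (math.sin(i1) * math.sin(i2)
             + math.cos(i1) * math.cos(i2) * math.cos(d1 - d2))
    # Clamp for floating-point safety before acos:
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

# Hypothetical mean directions from two sampled lake beds:
sep = angular_distance(5.0, 55.0, 12.0, 58.0)
print(round(sep, 1))  # a few degrees; compare against the alpha-95 cones
```

Two site-mean directions would typically be called "coinciding" when this separation is small compared with their 95% confidence cones.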

Chapter 4 demonstrates that seismically induced liquefaction can cause resetting of detrital remanent magnetization and acquisition of a liquefaction remanent magnetization (LRM). I sampled three different liquefaction features, a sandbody formed in the Elsinore fault zone, diapirs from sediments of Mono Lake, and a sandblow in these same sediments. In every case the liquefaction features showed stable magnetization despite substantial physical disruption. In addition, in the case of the sandblow and the sandbody, the intensity of the natural remanent magnetization increased by up to an order of magnitude.

In Chapter 5 I apply paleomagnetic measurements to the tectonic rotations in a 52-meter-long transect across the San Andreas fault zone at the Pallett Creek paleoseismological site. This site has presented a significant problem because the brittle long-term average slip-rate across the fault is significantly less than the slip-rate from other nearby sites. I find sections adjacent to the fault with tectonic rotations of up to 30°. If interpreted as block rotations, the non-brittle offset was 14.0 (+2.8/-2.1) meters in the last three earthquakes and 8.5 (+1.0/-0.9) meters in the last two. Combined with the brittle offset in these events, the last three events all had about 6 meters of total fault offset, even though the intervals between them were markedly different.

In Appendix 1 I present a detailed description of my standard sampling and demagnetization procedure.

In Appendix 2 I present a detailed discussion of the study at Panum Crater that yielded the well-constrained paleomagnetic direction used in developing the secular variation curve in Chapter 2. In addition, from sampling two distinctly different clast types in a block-and-ash flow deposit from Panum Crater, I find that this flow had a complex emplacement and cooling history. Angular, glassy "lithic" blocks were emplaced at temperatures above 600° C. Some of these had cooled nearly completely, whereas others had cooled only to 450° C, when settling in the flow rotated the blocks slightly. The partially cooled blocks then finished cooling without further settling. Highly vesicular, breadcrusted pumiceous clasts had not yet cooled to 600° C at the time of these rotations, because they show a stable, well clustered, unidirectional magnetic vector.

Relevance: 10.00%

Publisher:

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. 
Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.
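For readers who want to work with the quoted fault-plane solution, the slip vector implied by strike, dip, and rake follows from the standard Aki-Richards expressions. The sketch below assumes that convention (strike measured as an azimuth, so N 67° W ≈ 293°):

```python
import math

def slip_vector(strike, dip, rake):
    """Unit slip vector (north, east, up) of the hanging wall from
    strike/dip/rake in degrees, Aki & Richards convention."""
    s, d, r = map(math.radians, (strike, dip, rake))
    n = math.cos(r) * math.cos(s) + math.sin(r) * math.cos(d) * math.sin(s)
    e = math.cos(r) * math.sin(s) - math.sin(r) * math.cos(d) * math.cos(s)
    u = math.sin(r) * math.sin(d)
    return n, e, u

# San Fernando main shock parameters quoted above:
n, e, u = slip_vector(strike=293.0, dip=52.0, rake=72.0)
print(round(n, 2), round(e, 2), round(u, 2))
```

The dominant upward component with a smaller horizontal part is consistent with the quoted mechanism: mainly thrust motion with a left-lateral component.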

Relevance: 10.00%

Publisher:

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, having grown from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, through the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections as to where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets used for the analyses. The primary database, which I collected from an outpatient HIV clinic, spans 1984 to the present. The second is the Multicenter AIDS Cohort Study (MACS) public dataset, which covers 1984 through October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection there exist high levels of immunosuppression.
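The right-skewed, non-Gaussian character of CD4 counts can be illustrated with synthetic data; a lognormal is one common modeling choice for such biomarkers. The numbers below are simulated for illustration only and are not the thesis data:

```python
import math
import random

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    return sum((x - m) ** 3 for x in xs) / (n * s2 ** 1.5)

random.seed(0)
# Synthetic CD4-like counts (cells/uL), drawn from a lognormal:
cd4 = [random.lognormvariate(mu=5.8, sigma=0.6) for _ in range(5000)]

print(round(skewness(cd4), 2))                      # markedly positive
print(round(skewness([math.log(x) for x in cd4]), 2))  # near zero
```

A strongly positive skewness on the raw scale, vanishing after a log transform, is the kind of departure from Gaussianity that forces non-normal distributional assumptions in analyses of this marker.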

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system as measured by CD4 T cell counts would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.
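Survival analysis of time to an AIDS-defining illness is conventionally done with the Kaplan-Meier estimator. A minimal self-contained sketch follows, using toy data rather than the 213-patient cohort:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times: time to endpoint or censoring for each patient.
    events: 1 = endpoint observed (AIDS-defining illness), 0 = censored.
    Returns a list of (event_time, S(t)) pairs.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, e in data if tt == t)
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= ties
        i += ties
    return curve

# Toy example (months to endpoint; not the thesis data):
times  = [6, 8, 8, 12, 15, 20, 24, 24, 30, 36]
events = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
print(kaplan_meier(times, events))
```

Each observed endpoint multiplies the running survival estimate by (1 − d/n), where d is the number of endpoints at that time and n the number still at risk; censored patients leave the risk set without reducing S(t).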

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, that looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in 90 percent of the population in controlling viral replication. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
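The incremental cost per life-year is a simple ratio of extra cost to extra survival. The inputs below are hypothetical, chosen only to echo the ~$101,000 figure quoted above, and are not the thesis calculation:

```python
def icer(cost_new, cost_old, ly_new, ly_old):
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra life-year gained by the new treatment over the old."""
    return (cost_new - cost_old) / (ly_new - ly_old)

# Hypothetical inputs: lifetime cost with HAART vs without, and
# corresponding life-years from diagnosis:
print(icer(cost_new=598_000, cost_old=93_000, ly_new=10, ly_old=5))  # 101000.0
```

Framing the comparison this way is what allows the text's comparison with coronary artery bypass surgery, which is evaluated on the same cost-per-life-year scale.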

Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 10.00%

Publisher:

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
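The structure of the model (minimize control cost over emission levels, subject to air-quality constraints) can be sketched in miniature. The cost coefficients below are hypothetical, and a coarse grid search stands in for the mathematical programming solver; only the emission bounds echo numbers from this abstract:

```python
def control_cost(rhc, nox):
    """Hypothetical annualized cost ($/yr) of abating below the 1975 base
    levels (670 tons/day RHC, 790 tons/day NOx); cost rises as emissions
    are cut. The $/ton coefficients are invented for illustration."""
    return 4.0e5 * (670 - rhc) + 2.5e5 * (790 - nox)

def min_cost(max_rhc, max_nox):
    """Least cost meeting air-quality constraints expressed as maximum
    allowed emissions; brute-force search over the feasible grid."""
    best = None
    for rhc in range(260, 671, 10):      # 260 = minimum RHC program
        for nox in range(460, 791, 10):  # 460 = minimum NOx program
            if rhc <= max_rhc and nox <= max_nox:
                c = control_cost(rhc, nox)
                if best is None or c < best[0]:
                    best = (c, rhc, nox)
    return best

print(min_cost(450, 550))  # optimum sits on the constraint boundary
```

Because cost is monotone decreasing in allowed emissions, the optimum lands on the constraint boundary, which is exactly why the full model can be solved by combining the cost-emission and air quality-emission relationships, as described later in the abstract.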

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the 1975 base level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions (e.g., that emissions are reduced proportionately at all points in space and time). For NO2 (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles (55 in 1969) can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons/day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year of ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively (at the 1969 NOx emission level).
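The quoted violation-days-versus-emissions points define a simple response curve, and intermediate emission levels can be read off by interpolation. The tabulated ozone values below are the ones quoted in this abstract; the query level is arbitrary:

```python
def interp(x, pts):
    """Piecewise-linear interpolation through sorted (x, y) points."""
    pts = sorted(pts)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside tabulated range")

# Expected ozone-violation days/yr vs RHC emissions (tons/day),
# Central Los Angeles, from the figures quoted above:
ozone = [(150, 0), (300, 10), (450, 30), (700, 75)]
print(interp(375, ozone))  # days/yr at an intermediate emission level
```

This is the kind of air quality-emission relationship that the graphical solution combines with the cost-emission curve in the next paragraph.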

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 10.00%

Publisher:

Abstract:

(1) Equation of State of Komatiite

The equation of state (EOS) of a molten komatiite (27 wt% MgO) was determined in the 5 to 36 GPa pressure range via shock wave compression from 1550°C and 0 bar. Shock wave velocity, US, and particle velocity, UP, in km/s follow the linear relationship US = 3.13(±0.03) + 1.47(±0.03) UP. Based on a calculated density at 1550°C, 0 bar of 2.745±0.005 g/cc, this US-UP relationship gives the isentropic bulk modulus KS = 27.0 ± 0.6 GPa, and its first and second isentropic pressure derivatives, K'S = 4.9 ± 0.1 and K''S = -0.109 ± 0.003 GPa^-1.
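The quoted moduli follow from the standard shock-wave identities for a linear US-UP fit: KS = ρ0·c0² and K'S = 4s − 1. A quick check with the numbers above (this is a consistency check, not a re-derivation of the thesis analysis):

```python
def bulk_modulus_from_us_up(rho0, c0, s):
    """Isentropic bulk modulus K_S = rho0*c0**2 (Pa) and its pressure
    derivative K_S' = 4*s - 1, from a linear shock fit U_S = c0 + s*U_P.
    rho0 in kg/m^3, c0 in m/s, s dimensionless."""
    ks = rho0 * c0 ** 2
    ksp = 4.0 * s - 1.0
    return ks, ksp

# Komatiite fit quoted above: rho0 = 2.745 g/cc, c0 = 3.13 km/s, s = 1.47
ks, ksp = bulk_modulus_from_us_up(rho0=2745.0, c0=3130.0, s=1.47)
print(round(ks / 1e9, 1), round(ksp, 2))  # ~27 GPa and ~4.9, as quoted
```

The agreement with KS = 27.0 ± 0.6 GPa and K'S = 4.9 ± 0.1 shows the published moduli are just the linear-fit parameters re-expressed.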

The calculated liquidus compression curve agrees within error with the static compression results of Agee and Walker [1988a] to 6 GPa. We determine that olivine (Fo94) will be neutrally buoyant in komatiitic melt of the composition we studied near 8.2 GPa. Clinopyroxene would also be neutrally buoyant near this pressure. Liquidus garnet-majorite may be less dense than this komatiitic liquid in the 20-24 GPa interval; however, pyropic garnet and perovskite phases are denser than this komatiitic liquid in their respective liquidus pressure intervals to 36 GPa. Liquidus perovskite may be neutrally buoyant near 70 GPa.

At 40 GPa, the density of shock-compressed molten komatiite would be approximately equal to the calculated density of an equivalent mixture of dense solid oxide components. This observation supports the model of Rigden et al. [1989] for compressibilities of liquid oxide components. Using their theoretical EOS for liquid forsterite and fayalite, we calculate the densities of a spectrum of melts from basaltic through peridotitic that are related to the experimentally studied komatiitic liquid by addition or subtraction of olivine. At low pressure, olivine fractionation lowers the density of basic magmas, but above 14 GPa this trend is reversed. All of these basic to ultrabasic liquids are predicted to have similar densities at 14 GPa, and this density is approximately equal to the bulk (PREM) mantle. This suggests that melts derived from a peridotitic mantle may be inhibited from ascending from depths greater than 400 km.

The EOS of ultrabasic magmas was used to model adiabatic melting in a peridotitic mantle. If komatiites are formed by >15% partial melting of a peridotitic mantle, then komatiites generated by adiabatic melting come from source regions in the lower transition zone (≈500-670 km) or the lower mantle (>670 km). The great depth of incipient melting implied by this model, and the melt density constraint mentioned above, suggest that komatiitic volcanism may be gravitationally hindered. Although komatiitic magmas are thought to separate from their coexisting crystals at a temperature ≈200°C greater than that for modern MORBs, their ultimate sources are predicted to be diapirs that, if adiabatically decompressed from initially solid mantle, were more than 700°C hotter than the sources of MORBs and derived from great depth.

We considered the evolution of an initially molten mantle, i.e., a magma ocean. Our model considers the thermal structure of the magma ocean, density constraints on crystal segregation, and approximate phase relationships for a nominally chondritic mantle. Crystallization will begin at the core-mantle boundary. Perovskite buoyancy at >70 GPa may lead to a compositionally stratified lower mantle with iron-enriched magnesiowüstite content increasing with depth. The upper mantle may be depleted in perovskite components. Olivine neutral buoyancy may lead to the formation of a dunite septum in the upper mantle, partitioning the ocean into upper and lower reservoirs, but this septum must be permeable.

(2) Viscosity Measurement with Shock Waves

We have examined in detail the analytical method for measuring shear viscosity from the decay of perturbations on a corrugated shock front. The relevance of initial conditions, finite shock amplitude, bulk viscosity, and the sensitivity of the measurements to the shock boundary conditions are discussed. The validity of the viscous perturbation approach is examined by numerically solving the second-order Navier-Stokes equations. These numerical experiments indicate that shock instabilities may occur even when the Kontorovich-D'yakov stability criteria are satisfied. The experimental results for water at 15 GPa are discussed, and it is suggested that the large effective viscosity determined by this method may reflect the existence of ice VII on the Rayleigh path of the Hugoniot. This interpretation reconciles the experimental results with estimates and measurements obtained by other means, and is consistent with the relationship of the Hugoniot with the phase diagram for water. Sound waves are generated at 4.8 MHz in the water experiments at 15 GPa. The existence of anelastic absorption modes near this frequency would also lead to large effective viscosity estimates.

(3) Equation of State of Molybdenum at 1400°C

Shock compression data to 96 GPa for pure molybdenum, initially heated to 1400°C, are presented. Finite strain analysis of the data gives a bulk modulus at 1400°C, K0S, of 244±2 GPa and its pressure derivative, K'0S, of 4. A fit of shock velocity to particle velocity gives the coefficients of US = C0 + S UP to be C0 = 4.77±0.06 km/s and S = 1.43±0.05. From the zero pressure sound speed, C0, a bulk modulus of 232±6 GPa is calculated that is consistent with extrapolation of ultrasonic elasticity measurements. The temperature derivative of the bulk modulus at zero pressure, ∂K0S/∂T|P, is approximately -0.012 GPa/K. A thermodynamic model is used to show that the thermodynamic Grüneisen parameter is proportional to the density and independent of temperature. The Mie-Grüneisen equation of state adequately describes the high temperature behavior of molybdenum under the present range of shock loading conditions.
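The same linear US-UP fit yields shock pressure through the Rankine-Hugoniot momentum relation P = ρ0·US·UP. The density of molybdenum at 1400°C used below (~10.2 g/cc) is an assumed value for illustration, not a number quoted in this abstract:

```python
def hugoniot_pressure(rho0, c0, s, up):
    """Shock pressure (Pa) from the Rankine-Hugoniot momentum equation
    P = rho0 * U_S * U_P, with the linear fit U_S = c0 + s * U_P.
    rho0 in kg/m^3; velocities in m/s."""
    us = c0 + s * up
    return rho0 * us * up

# Molybdenum fit quoted above (c0 = 4.77 km/s, s = 1.43);
# rho0 at 1400 C is assumed, ~10.2 g/cc:
p = hugoniot_pressure(rho0=10200.0, c0=4770.0, s=1.43, up=1300.0)
print(round(p / 1e9))  # GPa, within the <=96 GPa range of the data
```

This is how individual shock states along the Hugoniot are reduced to pressure-density points before the finite strain analysis quoted above.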

Relevance: 10.00%

Publisher:

Abstract:

Part I

Chapter 1.....A physicochemical study of the DNA molecules from the three bacteriophages, N1, N5, and N6, which infect the bacterium, M. lysodeikticus, has been made. The molecular weights, as measured by both electron microscopy and sedimentation velocity, are 23 x 10^6 for N5 DNA and 31 x 10^6 for N1 and N6 DNA's. All three DNA's are capable of thermally reversible cyclization. N1 and N6 DNA's have identical or very similar base sequences as judged by membrane filter hybridization and by electron microscope heteroduplex studies. They have identical or similar cohesive ends. These results are in accord with the close biological relation between N1 and N6 phages. N5 DNA is not closely related to N1 or N6 DNA. The denaturation Tm of all three DNA's is the same and corresponds to a (GC) content of 70%. However, the buoyant densities in CsCl of N1 and N6 DNA's are lower than expected, corresponding to predicted GC contents of 64 and 67%. The buoyant densities in Cs2SO4 are also somewhat anomalous. The buoyant density anomalies are probably due to the presence of odd bases. However, direct base composition analysis of N1 DNA by anion exchange chromatography confirms a GC content of 70%, and, in the elution system used, no peaks due to odd bases are present.
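The "expected" CsCl buoyant densities here come from the standard empirical relation ρ = 1.660 + 0.098·(G+C fraction) (Schildkraut-Marmur-Doty). A sketch of the relation and its inverse; the measured density used in the example is an illustrative value, not one quoted in the abstract:

```python
def buoyant_density_cscl(gc_fraction):
    """CsCl buoyant density (g/cm^3) predicted from G+C content via the
    empirical Schildkraut-Marmur-Doty relation rho = 1.660 + 0.098*GC."""
    return 1.660 + 0.098 * gc_fraction

def gc_from_density(rho):
    """Invert the relation: apparent G+C fraction from buoyant density."""
    return (rho - 1.660) / 0.098

print(round(buoyant_density_cscl(0.70), 4))  # density expected at 70% G+C
print(round(gc_from_density(1.7227), 3))     # illustrative "low" density
```

A density about 0.006 g/cm^3 below the 70% G+C prediction back-calculates to an apparent G+C near 64%, which is the kind of discrepancy attributed above to odd bases.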

Chapter 2.....A covalently closed circular DNA form has been observed as an intracellular form during both productive and abortive infection processes in M. lysodeikticus. This species has been isolated by the method of CsCl-ethidium bromide centrifugation and examined with an electron microscope.

Chapter 3.....A minute circular DNA has been discovered as a homogeneous population in M. lysodeikticus. Its length and molecular weight as determined by electron microscopy are 0.445 μ and 0.88 x 10^6 daltons respectively. There is about one minicircle per bacterium.
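Contour length converts to molecular weight through the B-form rise per base pair and an average base-pair mass. The constants below are textbook approximations (≈0.34 nm/bp and ≈660 Da/bp); calibration against internal length standards, as presumably done in the thesis, shifts them somewhat:

```python
def mw_from_contour_length(length_um, nm_per_bp=0.34, da_per_bp=660.0):
    """Approximate duplex-DNA molecular weight (daltons) from an
    electron-microscope contour length in microns, assuming the B-form
    rise per base pair and an average base-pair mass."""
    bp = length_um * 1000.0 / nm_per_bp  # microns -> nm -> base pairs
    return bp * da_per_bp

# Minicircle contour length quoted above:
print(round(mw_from_contour_length(0.445) / 1e6, 2))  # ~0.86 x 10^6 Da
```

The result is close to the quoted 0.88 x 10^6 daltons; the small difference reflects the approximate constants rather than any inconsistency in the measurement.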

Chapter 4.....Several strains of E. coli 15 harbor a prophage. Viral growth can be induced by exposing the host to mitomycin C or to uv irradiation. The coliphage 15 particles from E. coli 15 and E. coli 15 T- appear as normal phage with head and tail structure; the particles from E. coli 15 TAU are tailless. The complete particles exert a colicinogenic activity on E. coli 15 and 15 T-, the tailless particles do not. No host for a productive viral infection has been found and the phage may be defective. The properties of the DNA of the virus have been studied, mainly by electron microscopy. After induction but before lysis, a closed circular DNA with a contour length of about 11.9 μ is found in the bacterium; the mature phage DNA is a linear duplex and 7.5% longer than the intracellular circular form. This suggests the hypothesis that the mature phage DNA is terminally repetitious and circularly permuted. The hypothesis was confirmed by observing that denaturation and renaturation of the mature phage DNA produce circular duplexes with two single-stranded branches corresponding to the terminal repetition. The contour length of the mature phage DNA was measured relative to φX RFII DNA and λ DNA; the calculated molecular weight is 27 x 10^6. The length of the single-stranded terminal repetition was compared to the length of φX 174 DNA under conditions where single-stranded DNA is seen in an extended form in electron micrographs. The length of the terminal repetition is found to be 7.4% of the length of the nonrepetitious part of the coliphage 15 DNA. The number of base pairs in the terminal repetition is variable in different molecules, with a fractional standard deviation of 0.18 of the average number in the terminal repetition. A new phenomenon termed "branch migration" has been discovered in renatured circular molecules; it results in forked branches, with two emerging single strands, at the position of the terminal repetition.
The distribution of branch separations between the two terminal repetitions in the population of renatured circular molecules was studied. The observed distribution suggests that there is an excluded volume effect in the renaturation of a population of circularly permuted molecules such that strands with close beginning points preferentially renature with each other. This selective renaturation and the phenomenon of branch migration both affect the distribution of branch separations; the observed distribution does not contradict the hypothesis of a random distribution of beginning points around the chromosome.

Chapter 5....Some physicochemical studies on the minicircular DNA species in E. coli 15 (0.670 μ, 1.47 x 10^6 daltons) have been made. Electron microscopic observations showed multimeric forms of the minicircle which amount to 5% of total DNA species and also showed presumably replicating forms of the minicircle. A renaturation kinetic study showed that the minicircle is a unique DNA species in its size and base sequence. A study of minicircle replication has been made under conditions in which host DNA synthesis is synchronized. Despite the experimental uncertainties involved, it seems that minicircle replication is random and the number of minicircles increases continuously throughout a generation of the host, regardless of host DNA synchronization.

Part II

The flow dichroism of dilute DNA solutions (A260≈0.1) has been studied in a Couette-type apparatus with the outer cylinder rotating and with the light path parallel to the cylinder axis. Shear gradients in the range of 5-160 sec^-1 were studied. The DNA samples were whole, "half," and "quarter" molecules of T4 bacteriophage DNA, and linear and circular λb2b5c DNA. For the linear molecules, the fractional flow dichroism is a linear function of molecular weight. The dichroism for linear λ DNA is about 1.8 times that of the circular molecule. For a given DNA, the dichroism is an approximately linear function of shear gradient, but with a slight upward curvature at low values of G, and some trend toward saturation at larger values of G. The fractional dichroism increases as the supporting electrolyte concentration decreases.