16 results for "Period before interference"

in CaltechTHESIS


Relevance: 80.00%

Abstract:

The 1-6 MeV electron flux at 1 AU has been measured for the time period October 1972 to December 1977 by the Caltech Electron/Isotope Spectrometers on the IMP-7 and IMP-8 satellites. The non-solar interplanetary electron flux reported here covered parts of five synodic periods. The 88 Jovian increases identified in these five synodic periods were classified by their time profiles. The fall time profiles were consistent with an exponential fall with τ ≈ 4-9 days. The rise time profiles displayed a systematic variation over the synodic period. Exponential rise time profiles with τ ≈ 1-3 days tended to occur in the time period before nominal connection, diffusive profiles predicted by the convection-diffusion model around nominal connection, and abrupt profiles after nominal connection.
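
As an illustration of how an exponential time constant such as the quoted τ ≈ 4-9 day fall times can be extracted, here is a minimal least-squares sketch on synthetic data; the flux values, noise level, and starting guesses are invented for illustration and are not the IMP-7/IMP-8 measurements.

    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic decaying electron flux after a Jovian increase (illustrative only).
    rng = np.random.default_rng(0)
    t_days = np.arange(0.0, 30.0, 1.0)
    true_tau = 6.0                                   # days, within the quoted 4-9 day range
    flux = 100.0 * np.exp(-t_days / true_tau) * (1 + 0.05 * rng.standard_normal(t_days.size))

    def exp_fall(t, amplitude, tau):
        # Exponential fall profile used to characterize the decay phase.
        return amplitude * np.exp(-t / tau)

    (amp_fit, tau_fit), _ = curve_fit(exp_fall, t_days, flux, p0=(flux[0], 5.0))
    print(f"fitted fall time constant: {tau_fit:.1f} days")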

The times of enhancements in the magnetic field, |B|, at 1 AU showed a better correlation than corotating interaction regions (CIR's) with Jovian increases and other changes in the electron flux at 1 AU, suggesting that |B| enhancements indicate the times that barriers to electron propagation pass Earth. Time sequences of the increases and decreases in the electron flux at 1 AU were qualitatively modeled by using the times that CIR's passed Jupiter and the times that |B| enhancements passed Earth.

The electron data observed at 1 AU were modeled by using a convection-diffusion model of Jovian electron propagation. The synodic envelope formed by the maxima of the Jovian increases was modeled by the envelope formed by the predicted intensities at a time less than that needed to reach equilibrium. Even though the envelope shape calculated in this way was similar to the observed envelope, the required diffusion coefficients were not consistent with a diffusive process.

Three Jovian electron increases at 1 AU for the 1974 synodic period were fit with rise time profiles calculated from the convection-diffusion model. For the fits without an ambient electron background flux, the values for the diffusion coefficients that were consistent with the data were k_x = 1.0-2.5 × 10^21 cm^2/sec and k_y = 1.6-2.0 × 10^22 cm^2/sec. For the fits that included the ambient electron background flux, the values for the diffusion coefficients that were consistent with the data were k_x = 0.4-1.0 × 10^21 cm^2/sec and k_y = 0.8-1.3 × 10^22 cm^2/sec.

Relevance: 30.00%

Abstract:

Cancellation of interfering frequency-modulated (FM) signals is investigated, with emphasis on applications to the cellular telephone channel as an important example of a multiple access communications system. In order to fairly evaluate analog FM multiaccess systems with respect to more complex digital multiaccess systems, a serious attempt to mitigate interference in the FM systems must be made. Information-theoretic results in the field of interference channels are shown to motivate the estimation and subtraction of undesired interfering signals. This thesis briefly examines the relative optimality of current FM techniques in known interference channels before pursuing the estimation and subtraction of interfering FM signals.

The capture-effect phenomenon of FM reception is exploited to produce simple interference-cancelling receivers with a cross-coupled topology. The use of phase-locked loop receivers cross-coupled with amplitude-tracking loops to estimate the FM signals is explored. The theory and function of these cross-coupled phase-locked loop (CCPLL) interference cancellers are examined. New interference cancellers inspired by optimal estimation and the CCPLL topology are developed, resulting in simpler receivers than those in prior art. Signal acquisition and capture effects in these complex dynamical systems are explained using the relationship of the dynamical systems to adaptive noise cancellers.
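
As background for the PLL-based estimation that the CCPLL canceller builds on, the sketch below is a single digital phase-locked loop demodulating one FM signal with no interferer present; the sample rate, loop gains, and message are toy values and this is not the cross-coupled canceller itself.

    import numpy as np

    fs = 1.0e5                      # sample rate (Hz), illustrative
    fc = 1.0e4                      # nominal carrier frequency (Hz)
    t = np.arange(0, 0.05, 1 / fs)
    message = np.sin(2 * np.pi * 50 * t)            # hypothetical baseband message
    kf = 2.0e3                                      # peak frequency deviation (Hz)
    phase = 2 * np.pi * np.cumsum(fc + kf * message) / fs
    rx = np.cos(phase)                              # received FM signal (noise-free)

    # Second-order digital PLL: the NCO phase advances by the loop-filter output each sample.
    alpha, beta = 0.05, 2.0e-3      # proportional and integral loop gains (hand-tuned)
    theta = 0.0                     # NCO phase estimate
    integ = 0.0                     # integrator state (tracks frequency offset)
    inst_freq = np.zeros_like(rx)   # demodulated instantaneous frequency (rad/sample)
    for n in range(rx.size):
        err = rx[n] * -np.sin(theta)            # phase detector (mixer; loop rejects 2f term)
        integ += beta * err
        dtheta = 2 * np.pi * fc / fs + alpha * err + integ
        theta += dtheta
        inst_freq[n] = dtheta
    # inst_freq, minus the carrier term, is proportional to the recovered message.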

FM interference-cancelling receivers are considered for increasing the frequency reuse in a cellular telephone system. Interference mitigation in the cellular environment is seen to require tracking of the desired signal during time intervals when it is not the strongest signal present. Use of interference cancelling in conjunction with dynamic frequency-allocation algorithms is viewed as a way of improving spectrum efficiency. Performance of interference cancellers indicates possibilities for greatly increased frequency reuse. The economics of receiver improvements in the cellular system is considered, including both the mobile subscriber equipment and the provider's tower (base station) equipment.

The thesis is divided into four major parts and a summary: the introduction, motivations for the use of interference cancellation, examination of the CCPLL interference canceller, and applications to the cellular channel. The parts are dependent on each other and are meant to be read as a whole.

Relevance: 30.00%

Abstract:

The attitude of the medieval church towards violence before the First Crusade in 1095 underwent a significant institutional evolution, from the peaceful tradition of the New Testament and the Roman persecution, through the prelate-led military campaigns of the Carolingian period and the Peace of God era. It would be superficially easy to characterize this transformation as the pragmatic and entirely secular response of a growing power to the changing world. However, such a simplification does not fully do justice to the underlying theology. While church leaders from the 5th Century to the 11th had vastly different motivations and circumstances under which to develop their responses to a variety of violent activities, the teachings of Augustine of Hippo provided a unifying theme. Augustine’s just war theology, in establishing which conflicts are acceptable in the eyes of God, focused on determining whether a proper causa belli or basis for war exists, and then whether a legitimate authority declares and leads the war. Augustine masterfully integrated aspects of the Old and New Testaments to create a lasting and compelling case for his definition of justified violence. Although at different times and places his theology has been used to support a variety of different attitudes, the profound influence of his work on the medieval church’s evolving position on violence is clear.

Relevance: 20.00%

Abstract:

This dissertation comprises three essays that use theory-based experiments to gain understanding of how cooperation and efficiency are affected by certain variables and institutions in different types of strategic interactions prevalent in our society.

Chapter 2 analyzes indefinite-horizon two-person dynamic favor exchange games with private information in the laboratory. Using a novel experimental design to implement a dynamic game with a stochastic jump signal process, this study provides insights into a relationship in which cooperation occurs without immediate reciprocity. The primary finding is that favor provision under these conditions is considerably less than under the most efficient equilibrium. Also, individuals do not engage in exact score-keeping of net favors; rather, the time since the last favor was provided affects decisions to stop or restart providing favors.

Evidence from experiments in Cournot duopolies is presented in Chapter 3, where players engage in a form of pre-play communication, termed a revision phase, before playing the one-shot game. During this revision phase individuals announce their tentative quantities, which are publicly observed, and revisions are costless. Under real-time revision the payoffs are determined only by the quantities selected at the end, whereas in a Poisson revision game opportunities to revise arrive according to a synchronous Poisson process and the tentative quantities corresponding to the last revision opportunity are implemented. Contrasting results emerge. While real-time revision of quantities results in choices that are more competitive than the static Cournot-Nash outcome, significantly lower quantities are implemented in the Poisson revision games. This shows that partial cooperation can be sustained even when individuals interact only once.
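
A small sketch of the synchronous Poisson revision mechanism described above: opportunities arrive at shared Poisson times, tentative quantities may change at each arrival, and the pair standing at the last arrival before the deadline is implemented. The horizon, arrival rate, and revision rule below are placeholders, not the experimental parameters.

    import numpy as np

    rng = np.random.default_rng(1)

    def poisson_revision_game(initial_q, revise, horizon=60.0, rate=0.2):
        """Simulate one play of a Poisson revision phase.

        initial_q : tentative quantities announced at time 0 (one per player)
        revise    : rule mapping (current quantities, time remaining) -> new quantities
        horizon   : length of the revision phase in seconds (assumed)
        rate      : arrival rate of synchronous revision opportunities, 1/s (assumed)
        """
        q = np.asarray(initial_q, dtype=float)
        t = rng.exponential(1.0 / rate)
        while t < horizon:
            q = revise(q, horizon - t)      # both players revise at the same arrival time
            t += rng.exponential(1.0 / rate)
        return q                            # quantities at the last arrival are implemented

    # Example rule: drift each tentative quantity halfway toward a symmetric target quantity.
    target_q = 20.0
    implemented = poisson_revision_game([30.0, 30.0], lambda q, _: q + 0.5 * (target_q - q))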

Chapter 4 investigates the effect of varying the message space in a public good game with pre-play communication where player endowments are private information. We find that neither binary communication nor a larger finite numerical message space results in any efficiency gain relative to the situation without any form of communication. Payoffs and public good provision are higher only when participants are provided with a discussion period through unrestricted text chat.

Relevance: 20.00%

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 20.00%

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to the estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and projections of where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database was one I collected from an outpatient HIV clinic; it includes data from 1984 to the present. The second database was the public dataset of the Multicenter AIDS Cohort Study (MACS), which covers the period between 1984 and October 1992. Comparisons are made between both datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection there exist high levels of immunosuppression.
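
A minimal illustration of checking the non-Gaussian shape of a CD4 T cell count distribution; the counts below are synthetic (roughly log-normal), not the clinic or MACS data, and the test choice is simply one common option.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    cd4 = rng.lognormal(mean=6.0, sigma=0.5, size=500)   # synthetic counts (cells/uL)

    stat_raw, p_raw = stats.normaltest(cd4)              # D'Agostino-Pearson test on raw counts
    stat_log, p_log = stats.normaltest(np.log(cd4))      # same test after a log transform
    print(f"raw counts: p = {p_raw:.3g} (small p -> reject Gaussian)")
    print(f"log counts: p = {p_log:.3g}")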

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found to dramatically and significantly reduce HIV RNA levels in the blood when used in combination with other antiretroviral agents. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS-defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. The direct lifetime cost for treating each HIV-infected patient with HAART is estimated to be between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
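
For concreteness, the incremental cost-effectiveness arithmetic behind figures like these is simply the extra treatment cost divided by the life-years gained; the comparison cost and survival gain below are invented placeholders, not the thesis's inputs.

    # Incremental cost-effectiveness ratio (ICER), with hypothetical inputs.
    lifetime_cost_with_haart = 598_000      # $, upper end of the quoted range
    lifetime_cost_without = 93_000          # $, assumed comparison cost (placeholder)
    life_years_gained = 5.0                 # assumed additional survival with HAART (placeholder)

    icer = (lifetime_cost_with_haart - lifetime_cost_without) / life_years_gained
    print(f"incremental cost per life-year saved: ${icer:,.0f}")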

Policymakers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance: 20.00%

Abstract:

Earthquake early warning (EEW) systems have been developing rapidly over the past decade. The Japan Meteorological Agency (JMA) had an EEW system operating during the 2011 M9 Tohoku earthquake in Japan, and this increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges to becoming practical, the availability of shorter-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time, and expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit human intervention to activate mitigation actions and must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.

Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence following a large earthquake.

To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, is based on the PEER Performance-Based Earthquake Engineering method. Use of surrogate models is suggested to improve computational efficiency. Also, new models are proposed to add the influence of lead time to the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in exchange for a possible reduction of the uncertainty of EEW information in the next update. Two practical examples, evacuation alert and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW and structural health monitoring systems, are also discussed.
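
A toy version of the kind of cost-benefit decision rule that ePAD formalizes, with every probability, loss, and cost below invented for illustration: act only when the expected loss avoided, given the uncertain EEW intensity prediction, exceeds the cost of acting.

    import numpy as np

    # Probabilistic shaking-intensity prediction from an EEW update (invented numbers).
    intensity_levels = np.array([3, 4, 5, 6, 7])
    p_intensity = np.array([0.30, 0.30, 0.20, 0.15, 0.05])   # must sum to 1

    loss_no_action = np.array([0.0, 1.0, 5.0, 20.0, 80.0])   # assumed loss at each level
    loss_with_action = 0.5 * loss_no_action                  # assumed mitigation effectiveness
    action_cost = 2.0                                        # e.g., cost of stopping an elevator

    expected_saving = np.sum(p_intensity * (loss_no_action - loss_with_action))
    activate = expected_saving > action_cost
    print(f"expected saving = {expected_saving:.2f}, activate mitigation: {activate}")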

Relevance: 20.00%

Abstract:

This work reports investigations of weakly superconducting proximity effect bridges. These bridges, which exhibit the Josephson effects, are produced by bisecting a superconductor with a short (<1 μm) region of material whose superconducting transition temperature is below that of the adjacent superconductors. The bridges are fabricated from layered refractory-metal thin films whose transition temperature depends upon the thickness ratio of the materials involved. The thickness ratio is changed in the area of the bridge to lower its transition temperature. This is done through novel photolithographic techniques described in Chapter 2.

If two such proximity effect bridges are connected in parallel, they form a quantum interferometer. The maximum zero-voltage current through this circuit is periodically modulated by the magnetic flux through the circuit. At a constant bias current, the modulation of the critical current produces a modulation in the dc voltage across the bridge. This change in dc voltage has been found to be the result of a change in the internal dissipation in the device. A simple model using lumped circuit theory and treating the bridges as quantum oscillators of frequency ω = 2eV/ħ, where V is the time-average voltage across the device, has been found to adequately describe the observed voltage modulation.

The quantum interferometers have been converted to a galvanometer through the inclusion of an integral thin film current path which couples magnetic flux through the interferometer. Thus a change in signal current produces a change in the voltage across the interferometer at a constant bias current. This work is described in Chapter 3 of the text.

The sensitivity of any device incorporating proximity effect bridges will ultimately be determined by the fluctuations in their electrical parameters. We have measured the spectral power density of the voltage fluctuations in proximity effect bridges using room-temperature electronics and a liquid-helium-temperature transformer to match the very low (~0.1 Ω) impedances characteristic of these devices.

We find the voltage noise to agree quite well with that predicted by phonon noise in the normal conduction through the bridge plus a contribution from the superconducting pair current through the bridge that is proportional to the ratio of this current to the time-average voltage across the bridge. The total voltage fluctuations are given by <V^2(f)> = 4kT R_d^2 I/V, where R_d is the dynamic resistance, I the total current, and V the voltage across the bridge. An additional noise source with a strong 1/f^n dependence, 1.5 < n < 2, appears if the bridges are fabricated upon a glass substrate. This excess noise, attributed to thermodynamic temperature fluctuations in the volume of the bridge, increases dramatically on a glass substrate due to the greatly diminished thermal diffusivity of the glass as compared to sapphire.
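
A quick numerical reading of the quoted expression <V^2(f)> = 4kT R_d^2 I/V, with representative values; only the ~0.1 Ω impedance scale comes from the text, while the temperature, bias current, and voltage below are assumptions for illustration.

    import math

    # Evaluate <V^2(f)> = 4*k*T*R_d^2*I/V for representative operating values.
    k_B = 1.380649e-23      # Boltzmann constant (J/K)
    T = 4.2                 # bath temperature (K), liquid helium, assumed
    R_d = 0.1               # dynamic resistance (ohm), order of magnitude from the text
    I = 1.0e-4              # total bias current (A), assumed
    V = 1.0e-6              # time-average voltage across the bridge (V), assumed

    S_V = 4 * k_B * T * R_d**2 * I / V      # voltage noise spectral density (V^2/Hz)
    print(f"<V^2(f)> ~ {S_V:.2e} V^2/Hz, i.e. {math.sqrt(S_V):.2e} V/sqrt(Hz)")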

Relevance: 20.00%

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance: 20.00%

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (>30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary, for example in a basin with a low-velocity lid.

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1942 and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to recent earthquakes. It was found that only minor changes in the epicenters were required but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms as expected indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected value. An aftershock of the 1942 earthquake appears to be larger than previously thought.

Relevance: 20.00%

Abstract:

Non-classical properties and quantum interference (QI) in two-photon excitation of a three-level atom (|1〉, |2〉, |3〉) in a ladder configuration, illuminated by multiple fields in non-classical (squeezed) and/or classical (coherent) states, are studied. Fundamentally new effects associated with quantum correlations in the squeezed fields and QI due to multiple excitation pathways have been observed. Theoretical studies and extrapolations of these findings have revealed possible applications which are far beyond any current capabilities, including ultrafast nonlinear mixing, ultrafast homodyne detection, and frequency metrology. The atom used throughout the experiments was cesium, which was magneto-optically trapped in a vapor cell to produce a Doppler-free sample. For the first part of the work the |1〉 → |2〉 → |3〉 transition (corresponding to the 6S_1/2 F = 4 → 6P_3/2 F′ = 5 → 6D_5/2 F″ = 6 transition) was excited by using the quantum-correlated signal (Ɛ_s) and idler (Ɛ_i) output fields of a subthreshold non-degenerate optical parametric oscillator, which was tuned so that the signal and idler fields were resonant with the |1〉 → |2〉 and |2〉 → |3〉 transitions, respectively. In contrast to excitation with classical fields, for which the excitation rate as a function of intensity always has an exponent greater than or equal to two, excitation with squeezed fields has been theoretically predicted to have an exponent that approaches unity for small enough intensities. This was verified experimentally by probing the exponent down to a slope of 1.3, demonstrating for the first time a purely non-classical effect associated with the interaction of squeezed fields and atoms. In the second part, excitation of the two-photon transition by three phase-coherent fields Ɛ_1, Ɛ_2, and Ɛ_0, resonant with the dipole |1〉 → |2〉 and |2〉 → |3〉 and quadrupole |1〉 → |3〉 transitions, respectively, is studied. QI in the excited state population is observed due to two alternative excitation pathways. This is equivalent to nonlinear mixing of the three excitation fields by the atom. Realizing that in the experiment the three fields are spaced in frequency over a range of 25 THz, and extending this scheme to other energy triplets and atoms, leads to the discovery that ranges of up to hundreds of THz can be bridged in a single mixing step. Motivated by these results, a master equation model has been developed for the system and its properties have been extensively studied.
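
A sketch of how the intensity exponent discussed above can be estimated: fit a straight line to log(excitation rate) versus log(intensity). The data here are synthetic, generated with an assumed exponent of 1.3, and merely stand in for the measurements.

    import numpy as np

    rng = np.random.default_rng(3)
    intensity = np.logspace(-2, 0, 12)                      # relative excitation intensity
    true_exponent = 1.3                                     # assumed, matching the probed slope
    rate = intensity**true_exponent * (1 + 0.05 * rng.standard_normal(intensity.size))

    # Linear fit in log-log space; the slope is the intensity exponent.
    slope, intercept = np.polyfit(np.log(intensity), np.log(rate), 1)
    print(f"fitted intensity exponent: {slope:.2f} (classical two-photon excitation gives >= 2)")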

Relevance: 20.00%

Abstract:

Fast radio bursts (FRBs) are a novel type of radio pulse whose physics is not yet understood at all. Only a handful of FRBs had been detected when we started this project. Taking into account the scant observations, we put physical constraints on FRBs. We excluded proposals of a Galactic origin for their extraordinarily high dispersion measures (DM), in particular stellar coronas and HII regions; our work therefore supports an extragalactic origin for FRBs. We show that the resolved scattering tail of FRB 110220 is unlikely to be due to propagation through the intergalactic plasma. Instead the scattering is probably caused by the interstellar medium in the FRB's host galaxy, and indicates that this burst sits in the central region of that galaxy. Pulse durations of order a millisecond constrain the source sizes of FRBs, implying enormous brightness temperatures and thus coherent emission. Electric fields near FRBs at cosmological distances would be so strong that they could accelerate free electrons from rest to relativistic energies in a single wave period. When we worked on FRBs, it was unclear whether they were genuine astronomical signals as distinct from 'perytons', clearly terrestrial radio bursts sharing some common properties with FRBs. Recently, in April 2015, astronomers discovered that perytons were emitted by microwave ovens: radio chirps similar to FRBs were emitted when their doors were opened while they were still heating. Evidence for the astronomical nature of FRBs has strengthened since our paper was published. Some bursts have been found to show linear and circular polarization, and Faraday rotation of the linear polarization has also been detected. I hope to resume working on FRBs in the near future, but after we completed our FRB paper I decided to pause this project because of the lack of observational constraints.

The pulsar triple system J0337+1715 has its orbital parameters fitted to high accuracy owing to the precise timing of the central millisecond pulsar. The two orbits are highly hierarchical, namely $P_{\mathrm{orb,1}}\ll P_{\mathrm{orb,2}}$, where 1 and 2 label the inner and outer white dwarf (WD) companions respectively. Moreover, their orbital planes almost coincide, providing a unique opportunity to study secular interaction associated purely with eccentricity beyond the solar system. Secular interaction involves only effects averaged over many orbits. Thus each companion can be represented by an elliptical wire with its mass distributed inversely proportional to its local orbital speed. Generally there exists a mutual torque, which vanishes only when their apsidal lines are parallel or anti-parallel. To maintain either mode, the eccentricity ratio, $e_1/e_2$, must be of the proper value, so that both apsidal lines precess together. For J0337+1715, $e_1\ll e_2$ for the parallel mode, while $e_1\gg e_2$ for the anti-parallel one. We show that the former precesses $\sim 10$ times slower than the latter. Currently the system is dominated by the parallel mode. Although only a little of the anti-parallel mode survives, both eccentricities, especially $e_1$, oscillate on a $\sim 10^3$ yr timescale. Detectable changes would occur within $\sim 1$ yr. We demonstrate that the anti-parallel mode gets damped $\sim 10^4$ times faster than its parallel brother by any dissipative process diminishing $e_1$. If this is tidal damping in the inner WD, we proceed to estimate its tidal quality parameter ($Q$) to be $\sim 10^6$, which was previously poorly constrained by observations. However, tidal damping may also happen during the preceding low-mass X-ray binary (LMXB) phase or during hydrogen thermonuclear flashes. But, in both cases, the inner companion fills its Roche lobe and probably suffers mass/angular momentum loss, which might cause $e_1$ to grow rather than decay.

Several pairs of solar system satellites occupy mean motion resonances (MMRs). We divide these into two groups according to their proximity to exact resonance. Proximity is measured by the existence of a separatrix in phase space. MMRs between Io-Europa, Europa-Ganymede and Enceladus-Dione are too distant from exact resonance for a separatrix to appear. A separatrix is present only in the phase spaces of the Mimas-Tethys and Titan-Hyperion MMRs and their resonant arguments are the only ones to exhibit substantial librations. When a separatrix is present, tidal damping of eccentricity or inclination excites overstable librations that can lead to passage through resonance on the damping timescale. However, after investigation, we conclude that the librations in the Mimas-Tethys and Titan-Hyperion MMRs are fossils and do not result from overstability.

Rubble piles are common in the solar system. Monolithic elements touch their neighbors in small localized areas, and voids occupy a significant fraction of the volume. In a fluid-free environment, heat cannot conduct through voids; only radiation can transfer energy across them. We model the effective thermal conductivity of a rubble pile and show that it is proportional to the square root of the pressure, $P$, for $P\leq \epsilon_y^3\mu$, where $\epsilon_y$ is the material's yield strain and $\mu$ its shear modulus. Our model provides an excellent fit to the depth dependence of the thermal conductivity in the top $140\,\mathrm{cm}$ of the lunar regolith. It also offers an explanation for the low thermal inertias of rocky asteroids and icy satellites. Lastly, we discuss how rubble piles slow down the cooling of small bodies such as asteroids.

Electromagnetic (EM) follow-up observations of gravitational wave (GW) events will help shed light on the nature of the sources, and more can be learned if the EM follow-ups can start as soon as the GW event becomes observable. In this paper, we propose a computationally efficient time-domain algorithm capable of detecting gravitational waves (GWs) from coalescing binaries of compact objects with nearly zero time delay. In cases where the signal is strong enough, our algorithm also has the flexibility to trigger EM observation before the merger. The key to the efficiency of our algorithm is the use of chains of so-called Infinite Impulse Response (IIR) filters, which filter time-series data recursively. Computational cost is further reduced by a template interpolation technique that requires filtering to be done only for a much coarser template bank than would otherwise be required to sufficiently recover the optimal signal-to-noise ratio. For future detectors with sensitivity extending to lower frequencies, our algorithm's computational cost is shown to increase rather insignificantly compared to the conventional time-domain correlation method. Moreover, at latencies of less than hundreds to thousands of seconds, this method is expected to be computationally more efficient than the straightforward frequency-domain method.
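
A bare-bones sketch of the recursive filtering idea: each output sample is a running, exponentially weighted correlation of the data with a fixed-frequency complex sinusoid, computed with O(1) work per sample. The sample rate, frequency, and memory time are invented, and this is a single filter, whereas the proposed pipeline chains many such filters and interpolates between templates.

    import numpy as np

    fs = 4096.0                  # strain sample rate (Hz), assumed
    f0 = 100.0                   # frequency tracked by this filter in the chain (Hz), assumed
    tau = 0.5                    # memory (e-folding) time of the filter (s), assumed
    pole = np.exp(-1.0 / (fs * tau) + 2j * np.pi * f0 / fs)

    def iir_correlate(strain):
        """y[n] = pole * y[n-1] + strain[n]: recursive correlation with a decaying sinusoid."""
        out = np.empty(strain.size, dtype=complex)
        acc = 0.0 + 0.0j
        for n, x in enumerate(strain):
            acc = pole * acc + x
            out[n] = acc
        return out

    # |iir_correlate(data)| rises when the data contain power near f0 with matching phase evolution.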

Relevance: 20.00%

Abstract:

A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
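
For reference, below is a bare-bones response-spectrum calculation of the kind whose digitization sensitivity is studied here: a damped single-degree-of-freedom oscillator is time-stepped under the digitized ground acceleration and its peak response recorded for each period. The integration scheme, sampling rate, and synthetic accelerogram are illustrative choices, not the thesis's procedure.

    import numpy as np

    def response_spectrum(accel, dt, periods, damping=0.05):
        """Pseudo-acceleration spectrum of a ground-acceleration record (central differences)."""
        spectrum = np.zeros(len(periods))
        for i, T in enumerate(periods):
            w = 2.0 * np.pi / T
            x_prev, x, x_peak = 0.0, 0.0, 0.0
            for a_g in accel:
                # x'' + 2*zeta*w*x' + w^2*x = -a_g, stepped with central differences
                x_next = 2.0 * x - x_prev + dt**2 * (
                    -a_g - 2.0 * damping * w * (x - x_prev) / dt - w**2 * x)
                x_peak = max(x_peak, abs(x_next))
                x_prev, x = x, x_next
            spectrum[i] = w**2 * x_peak          # pseudo-acceleration Sa = w^2 * Sd
        return spectrum

    # Example: synthetic white-noise accelerogram sampled at 50 Hz, 5% damping.
    rng = np.random.default_rng(4)
    Sa = response_spectrum(rng.standard_normal(1000), dt=0.02, periods=np.linspace(0.1, 3.0, 30))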

Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.

Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.

Relevance: 20.00%

Abstract:

I. Studies on Nicotinamide Adenine Dinucleotide Glycohydrase (NADase)

NADase, like tyrosinase and L-amino acid oxidase, is not present in two day old cultures of wild type Neurospora, but it is coinduced with those two enzymes during starvation in phosphate buffer. The induction of NADase, like tyrosinase, is inhibited by puromycin. The induction of all three enzymes is inhibited by actinomycin D. These results suggest that NADase is synthesized de novo during induction as has been shown directly for tyrosinase. NADase induction differs in being inhibited by certain amino acids.

The tyrosinaseless mutant ty-1 contains a non-dialyzable, heat labile inhibitor of NADase. A new mutant, P110A, synthesizes NADase and L-amino acid oxidase while growing. A second strain, pe, fl;cot, makes NADase while growing. Both strains can be induced to make the other enzymes. These two strains prove that the control of these three enzymes is divisible. The strain P110A makes NADase even when grown in the presence of Tween 80. The synthesis of both NADase and L-amino acid oxidase by P110A is suppressed by complete medium. The theory of control of the synthesis of the enzymes is discussed.

II. Studies with EDTA

Neurospora tyrosinase contains copper but, unlike other phenol oxidases, this copper has never been removed reversibly. It was thought that the apo-enzyme might be made in vivo in the absence of copper. Therefore cultures were treated with EDTA to remove copper before the enzyme was induced. Although no apo-tyrosinase was detected, new information on the induction process was obtained.

A treatment of Neurospora with 0.5% EDTA, pH 7, inhibits the subsequent induction during starvation in phosphate buffer of tyrosinase, L-amino acid oxidase, and NADase. The inhibition of tyrosinase and L-amino acid oxidase induction is completely reversed by adding 5 × 10^-5 M CaCl2, 5 × 10^-4 M CuSO4, and a mixture of L-amino acids (2 × 10^-3 M each) to the buffer. Tyrosinase induction is also fully restored by 5 × 10^-4 M CaCl2 and amino acids. As yet NADase has been only partially restored.

The copper probably acts by sequestering EDTA left in the mycelium and may be replaced by nickel. The EDTA apparently removes some calcium from the mycelium, which the added calcium replaces. Magnesium cannot replace calcium. The amino acids probably replace endogenous amino acids lost to the buffer after the EDTA treatment.

The EDTA treatment also increases permeability, thereby increasing the sensitivity of induction to inhibition by actinomycin D and allowing cell contents to be lost to the induction buffer. EDTA treatment also inhibits the uptake of exogenous amino acids and their incorporation into proteins.

The lag period that precedes the first appearance of tyrosinase is demonstrated to be a separate dynamic phase of induction. It requires oxygen. It is inhibited by EDTA, but can be completed after EDTA treatment in the presence of 5 × 10^-5 M CaCl2 alone, although no tyrosinase is synthesized under these conditions.

The time course of induction has an early exponential phase suggesting an autocatalytic mechanism of induction.

The mode of action of EDTA, the process of induction and the kinetics of induction are discussed.

Relevance: 20.00%

Abstract:

This study is concerned with some of the properties of roll waves that develop naturally from a turbulent uniform flow in a wide rectangular channel on a constant steep slope. The wave properties considered were depth at the wave crest, depth at the wave trough, wave period, and wave velocity. The primary focus was on the mean values and standard deviations of the crest depths and wave periods at a given station and how these quantities varied with distance along the channel.

The wave properties were measured in a laboratory channel in which roll waves developed naturally from a uniform flow. The Froude number F (F = u_n/√(g h_n), u_n = normal velocity, h_n = normal depth, g = acceleration of gravity) ranged from 3.4 to 6.0 for channel slopes S_o of 0.05 and 0.12 respectively. In the initial phase of their development the roll waves appeared as small-amplitude waves with a continuous water surface profile. These small-amplitude waves subsequently developed into large-amplitude shock waves. Shock waves were found to overtake and combine with other shock waves, with the result that the crest depth of the combined wave was larger than the crest depths before the overtake. Once roll waves began to develop, the mean value of the crest depths h_max increased with distance. Once the shock waves began to overtake, the mean wave period T_av increased approximately linearly with distance.

For a given Froude number and channel slope the observed quantities h_max/h_n, T′ (T′ = S_o T_av √(g/h_n)), and the standard deviations of h_max/h_n and T′ could be expressed as unique functions of l/h_n (l = distance from the beginning of the channel) for the two-fold change in h_n occurring in the observed flows. A given value of h_max/h_n occurred at smaller values of l/h_n as the Froude number was increased. For a given value of h_max/h_n, the growth rate δh_max/(h_max δl) of the shock waves increased as the Froude number was increased.
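
The two dimensionless groups used throughout can be computed directly, as in the short sketch below; the depth, velocity, and period are assumed example values, and only the slope is one of the two values tested.

    import math

    g = 9.81          # acceleration of gravity (m/s^2)
    h_n = 0.008       # normal depth (m), assumed
    u_n = 1.5         # normal velocity (m/s), assumed
    S_o = 0.05        # channel slope (one of the two slopes tested)
    T_av = 2.0        # mean wave period at a station (s), assumed

    F = u_n / math.sqrt(g * h_n)                 # Froude number of the undisturbed uniform flow
    T_prime = S_o * T_av * math.sqrt(g / h_n)    # dimensionless mean wave period T'
    print(f"F = {F:.2f}, T' = {T_prime:.2f}")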

A laboratory channel was also used to measure the wave properties of periodic permanent roll waves. For a given Froude number and channel slope the h_max/h_n vs. T′ relation did not agree with a theory in which the weight of the shock front was neglected. After the theory was modified to include this weight, the observed values of h_max/h_n were within an average of 6.5 percent of the predicted values, and the maximum discrepancy was 13.5 percent.

For h_max/h_n sufficiently large (h_max/h_n > approximately 1.5) it was found that the h_max/h_n vs. T′ relation for natural roll waves was practically identical to the h_max/h_n vs. T′ relation for periodic permanent roll waves at the same Froude number and slope. As a result of this correspondence between periodic and natural roll waves, the growth rate δh_max/(h_max δl) of shock waves was predicted to depend on the channel slope, and this slope dependence was observed in the experiments.