14 results for Statistical peak moments

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
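As an illustration of the kind of joint prediction density described above, one common way to write an interacting-Gaussian-process model is sketched below; the symbols (robot trajectory f^(R), pedestrian trajectories f^(i), observations z, interaction potential ψ) are generic notation, not necessarily the exact formulation used in the thesis:

\[
p\!\left(\mathbf{f}^{(R)},\mathbf{f}^{(1)},\dots,\mathbf{f}^{(N)} \mid \mathbf{z}\right)
\;\propto\;
\psi\!\left(\mathbf{f}^{(R)},\mathbf{f}^{(1)},\dots,\mathbf{f}^{(N)}\right)\,
p_{\mathrm{GP}}\!\left(\mathbf{f}^{(R)} \mid \mathbf{z}^{(R)}\right)
\prod_{i=1}^{N} p_{\mathrm{GP}}\!\left(\mathbf{f}^{(i)} \mid \mathbf{z}^{(i)}\right).
\]

Here each trajectory carries an independent Gaussian-process prior conditioned on its own observations, ψ down-weights joint trajectories that pass too close to one another (encoding cooperative collision avoidance), and the navigation command is read off as a statistic of this joint density, e.g. the robot component of its mode.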

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators at crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground-truth pedestrian crowd data. We make this database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.

Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (more unknowns than equations). In recent times, however, a large body of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data arising in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in underdetermined systems, both for a class of parameter estimation problems and for the problem of sparse recovery in compressive sensing. The thesis makes two main contributions: the design of new sampling and statistical estimation algorithms for array processing, and the development of improved guarantees for sparse reconstruction by introducing a statistical framework into the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
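As a concrete illustration of why such sparse geometries help, the short sketch below (illustrative only; the array sizes and variable names are placeholders, not taken from the thesis) builds a two-level nested array and counts the distinct lags in its difference coarray, which is the virtual aperture that correlation-aware methods exploit.

import numpy as np

# Two-level nested array: N1 inner sensors at spacing d, N2 outer sensors
# at spacing (N1+1)d.  Positions are in units of d.
N1, N2 = 3, 3
inner = np.arange(1, N1 + 1)                 # d, 2d, ..., N1*d
outer = (N1 + 1) * np.arange(1, N2 + 1)      # (N1+1)d, 2(N1+1)d, ...
sensors = np.concatenate([inner, outer])     # N1 + N2 physical sensors

# Difference coarray: the set of all pairwise lags n_i - n_j.
lags = np.unique(sensors[:, None] - sensors[None, :])

print("physical sensors:       ", sensors.size)          # 6
print("distinct coarray lags:  ", lags.size)              # 23
print("contiguous up to lag +/-", N2 * (N1 + 1) - 1)      # 11

With O(N1 + N2) physical sensors, the coarray supplies O(N1·N2) consecutive correlation lags, which is the mechanism by which more sources than sensors can be identified once correlation priors are brought in.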

This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.

For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear-wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.

To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse; otherwise, it is expected to survive the ground motion.
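A minimal sketch of this filter-and-compare procedure is given below, assuming SciPy's standard Butterworth filter; the filter order, cutoff frequency, sampling rate, capacity value, and the synthetic record are placeholders chosen for illustration, not the values recommended in the thesis.

import numpy as np
from scipy.signal import butter, filtfilt

def pfa_collapse_check(acc, fs, cutoff_hz, order, capacity):
    """Sketch of the PFA idea: low-pass the ground acceleration and
    compare its peak against a lateral-capacity threshold."""
    b, a = butter(order, cutoff_hz, btype="low", fs=fs)
    filtered = filtfilt(b, a, acc)         # zero-phase low-pass filter
    pfa = np.max(np.abs(filtered))         # peak filtered acceleration
    return pfa, pfa > capacity             # True -> predicted collapse

# Example with a synthetic record (illustrative only):
fs = 100.0                                  # samples per second
t = np.arange(0, 40, 1 / fs)
acc = 0.3 * 9.81 * np.sin(2 * np.pi * 0.4 * t) * np.exp(-0.05 * t)
pfa, collapse = pfa_collapse_check(acc, fs, cutoff_hz=1.0, order=4,
                                   capacity=0.25 * 9.81)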

The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.

The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.

Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.

We also provide a closed-form version of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.

Relevance:

20.00%

Publisher:

Abstract:

In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.

For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods, in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort, and examples are presented for which the accuracy of the proposed approximations compares favorably with results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.

Laplace's method of asymptotic approximation is applied to approximate the probability integrals that arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem, and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates of systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independent, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases it may be computationally expensive to transform the variables, so an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations, and results are compared with existing approximations.
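For reference, the standard form of Laplace's asymptotic approximation invoked above can be written as follows (generic notation, not the thesis' specific probability integrals): with θ* the minimizer of g and H the Hessian of g at θ*,

\[
\int_{\mathbb{R}^n} f(\boldsymbol{\theta})\, e^{-\lambda g(\boldsymbol{\theta})}\, d\boldsymbol{\theta}
\;\approx\;
f(\boldsymbol{\theta}^{*})\, e^{-\lambda g(\boldsymbol{\theta}^{*})}
\left(\frac{2\pi}{\lambda}\right)^{n/2}
\left|\det \mathbf{H}(\boldsymbol{\theta}^{*})\right|^{-1/2},
\qquad \lambda \to \infty .
\]

Evaluating the right-hand side requires only the minimizer and the local curvature there, which is why the multidimensional integration reduces to an optimization problem and why the approximation becomes exact in the appropriate limit.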

Relevance:

20.00%

Publisher:

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to an estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). Using statistical analysis of clinical data, this paper examines where we were, where we are, and where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database was collected by the author from an outpatient HIV clinic, with data spanning 1984 to the present. The second database is the Multicenter AIDS Cohort Study (MACS) public dataset, covering 1984 through October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian; thus, distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are also non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era, characterized by a new class of drugs and new technology, changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was the presence of an AIDS-defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens (efforts to control viral replication through the administration of different combinations of antiretrovirals) were not effective in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs of HAART are estimated: the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only $101,000, comparable to the incremental cost per year of life saved from coronary artery bypass surgery.

Policymakers need to be aware that although HAART can delay disease progression, it is not a cure, and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance:

20.00%

Publisher:

Abstract:

In the measurement of the Higgs boson decaying into two photons, the parametrization of an appropriate background model is essential for fitting the Higgs signal mass peak over a continuous background. This diphoton background modeling is crucial in the statistical process of calculating exclusion limits and the significance of observations in comparison to a background-only hypothesis. It is therefore ideal to obtain knowledge of the physical shape of the background mass distribution, as the use of an improper function can lead to biases in the observed limits. Using an Information-Theoretic (I-T) approach for valid inference, we apply the Akaike Information Criterion (AIC) as a measure of the separation of a fitting model from the data. We then implement a multi-model inference ranking method to build a fit model that most closely represents the Standard Model background in 2013 diphoton data recorded by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC). Potential applications and extensions of this model-selection technique are discussed with reference to CMS detector performance measurements as well as potential physics analyses at future detectors.
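A minimal sketch of the AIC-based ranking described above is given below; the function name and the numbers are illustrative assumptions, and nothing here reproduces the thesis' actual fit models or data.

import numpy as np

def akaike_ranking(log_likelihoods, n_params):
    """Rank candidate background models by AIC and return Akaike weights.

    log_likelihoods : maximized log-likelihood of each candidate fit
    n_params        : number of free parameters of each candidate
    """
    aic = 2 * np.asarray(n_params) - 2 * np.asarray(log_likelihoods)
    delta = aic - aic.min()              # separation from the best model
    w = np.exp(-0.5 * delta)
    return aic, w / w.sum()              # weights sum to 1 across candidates

# Example: three hypothetical background shapes fit to the same diphoton data.
aic, weights = akaike_ranking(log_likelihoods=[-1051.2, -1049.8, -1049.5],
                              n_params=[3, 4, 5])

The Akaike weights express the relative support for each candidate shape, which is the ingredient a multi-model inference ranking combines when building the final background fit model.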

Relevance:

20.00%

Publisher:

Abstract:

The magnetic moments of amorphous ternary alloys containing Pd, Co and Si in atomic concentrations corresponding to Pd_(80-x)Co_xSi_(20) in which x is 3, 5, 7, 9, 10 and 11, have been measured between 1.8 and 300°K and in magnetic fields up to 8.35 kOe. The alloys were obtained by rapid quenching of a liquid droplet and their structures were analyzed by X-ray diffraction. The measurements were made in a null-coil pendulum magnetometer in which the temperature could be varied continuously without immersing the sample in a cryogenic liquid. The alloys containing 9 at.% Co or less obeyed Curie's Law over certain temperature ranges, and had negligible permanent moments at room temperature. Those containing 10 and 11 at.% Co followed Curie's Law only above approximately 200°K and had significant permanent moments at room temperature. For all alloys, the moments calculated from Curie's Law were too high to be accounted for by the moments of individual Co atoms. To explain these findings, a model based on the existence of superparamagnetic clustering is proposed. The cluster sizes calculated from the model are consistent with the rapid onset of ferromagnetism in the alloys containing 10 and 11 at.% Co and with the magnetic moments in an alloy containing 7 at.% Co heat treated in such a manner as to contain a small amount of a crystalline phase. In alloys containing 7 at.% Co or less, a maximum in the magnetization vs temperature curve was observed around 10°K. This maximum was eliminated by cooling the alloy in a magnetic field, and an explanation for this observation is suggested.
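The superparamagnetic-cluster argument sketched above can be made quantitative with the standard Curie-law relations below; the notation (cluster size n, per-atom moment μ_Co) is generic, and the algebra is only meant to show why clustering inflates the apparent moment.

\[
\chi = \frac{C}{T},
\qquad
C = \frac{N_{\mathrm{cl}}\,\mu_{\mathrm{cl}}^{2}}{3 k_{B}},
\qquad
\mu_{\mathrm{cl}} = n\,\mu_{\mathrm{Co}},
\quad
N_{\mathrm{cl}} = \frac{N_{\mathrm{Co}}}{n}
\;\Longrightarrow\;
\mu_{\mathrm{eff}}^{\text{per atom}} = \sqrt{n}\,\mu_{\mathrm{Co}} .
\]

If the Co atoms group into clusters of n atoms whose moments rotate together, the Curie constant measured per Co atom corresponds to an apparent moment √n μ_Co, which exceeds the single-atom value, consistent with moments that are too high to be accounted for by individual Co atoms.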

Relevance:

20.00%

Publisher:

Abstract:

The influence of composition on the structure and on the electrical and magnetic properties of amorphous Pd-Mn-P and Pd-Co-P prepared by rapid quenching techniques was investigated in terms of (1) the 3d band filling of the first transition metal group, (2) the phosphorus concentration, phosphorus acting as an electron donor, and (3) the transition metal concentration.

The structure is essentially characterized by a set of polyhedral subunits that are the inverse of the packing of hard spheres in real space. Examination of computer-generated distribution functions, using a Monte Carlo random statistical distribution of these polyhedral units, demonstrated the reproducibility of the experimentally calculated atomic distribution function. As a result, several possible "structural parameters" are proposed, such as the number of nearest neighbors, the metal-to-metal distance, the degree of short-range order, and the affinity between metal-metal and metal-metalloid pairs. It is shown that the degree of disorder increases from Ni to Mn. Similar behavior is observed with increasing phosphorus concentration.

The magnetic properties of Pd-Co-P alloys show that they are ferromagnetic, with a Curie temperature between 272 and 399°K as the cobalt concentration increases from 15 to 50 at.%. Below 20 at.% Co, the short-range exchange interactions that produce the ferromagnetism are unable to establish long-range magnetic order, and a peak in the magnetization appears in the lowest temperature range. The electrical resistivity measurements were performed from liquid helium temperatures up to the vicinity of the melting point (900°K). The thermomagnetic analysis was carried out under an applied field of 6.0 kOe. The electrical resistivity of Pd-Co-P shows the coexistence of a Kondo-like minimum with ferromagnetism. The minimum becomes less important as the transition metal concentration increases, and the coefficients of ln T and T² become smaller and strongly temperature dependent. The negative magnetoresistivity is a strong indication of the existence of localized moments.
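One common way to parameterize a resistivity curve with a Kondo-like minimum, consistent with the ln T and T² coefficients quoted above (though not necessarily the exact fit form used in the thesis), is

\[
\rho(T) \;\approx\; \rho_{0} \;-\; A\,\ln T \;+\; B\,T^{2},
\qquad
\frac{d\rho}{dT}=0
\;\Longrightarrow\;
T_{\min} = \sqrt{\frac{A}{2B}} ,
\]

so the two coefficients fix both the depth and the position of the minimum; as A and B shrink with increasing transition-metal concentration, the minimum becomes shallower, as described above.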

The temperature coefficient of resistivity, which is positive for Pd-Fe-P, Pd-Ni-P, and Pd-Co-P, becomes negative for Pd-Mn-P. It is possible to account for the negative temperature dependence by the localized spin fluctuation model and the high density of states at the Fermi energy, which reaches a maximum between Mn and Cr. The magnetization curves for Pd-Mn-P are typical of those resulting from the interplay of different exchange forces. The established relationship between susceptibility and resistivity confirms the localized spin fluctuation model. The magnetoresistivity of Pd-Mn-P could be interpreted in terms of short-range magnetic ordering that could arise from Ruderman-Kittel-type interactions.

Relevance:

20.00%

Publisher:

Abstract:

Nearly all young stars are variable, with the variability traditionally divided into two classes: periodic variables and aperiodic or "irregular" variables. Periodic variables have been studied extensively, typically using periodograms, while aperiodic variables have received much less attention due to a lack of standard statistical tools. However, aperiodic variability can serve as a powerful probe of young star accretion physics and inner circumstellar disk structure. For my dissertation, I analyzed data from a large-scale, long-term survey of the nearby North America Nebula complex, using Palomar Transient Factory photometric time series collected on a nightly or every-few-nights cadence over several years. This survey is the most thorough exploration of variability in a sample of thousands of young stars over time baselines of days to years, revealing a rich array of lightcurve shapes, amplitudes, and timescales.

I have constrained the timescale distribution of all young variables, periodic and aperiodic, on timescales from less than a day to ~100 days. I have shown that the distribution of timescales for aperiodic variables peaks at a few days, with relatively few (~15%) sources dominated by variability on tens of days or longer. My constraints on aperiodic timescale distributions are based on two new tools, magnitude- vs. time-difference (Δm-Δt) plots and peak-finding plots, for describing aperiodic lightcurves; this thesis provides simulations of their performance and presents recommendations on how to apply them to aperiodic signals in other time series data sets. In addition, I have measured the error introduced into colors or SEDs from combining photometry of variable sources taken at different epochs. These are the first quantitative results to be presented on the distributions in amplitude and time scale for young aperiodic variables, particularly those varying on timescales of weeks to months.
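A minimal sketch of how Δm-Δt pairs can be tabulated from a single lightcurve is shown below; the function names, binning choice, and summary statistic are illustrative assumptions, not the exact recipe developed in the thesis.

import numpy as np

def delta_m_delta_t(t, m):
    """Return all pairwise (Δt, |Δm|) values from one lightcurve."""
    i, j = np.triu_indices(t.size, k=1)      # every unique epoch pair
    return np.abs(t[j] - t[i]), np.abs(m[j] - m[i])

def binned_amplitude(dt, dm, n_bins=20):
    """Median |Δm| in logarithmic Δt bins, a rough probe of variability timescale."""
    edges = np.logspace(np.log10(dt.min()), np.log10(dt.max()), n_bins + 1)
    idx = np.digitize(dt, edges) - 1
    return edges, np.array([np.median(dm[idx == k]) if np.any(idx == k) else np.nan
                            for k in range(n_bins)])

# Usage with a hypothetical lightcurve (days, magnitudes):
t = np.sort(np.random.uniform(0, 300, 200))
m = 15.0 + 0.2 * np.sin(2 * np.pi * t / 17.0) + 0.05 * np.random.randn(t.size)
dt, dm = delta_m_delta_t(t, m)
edges, med = binned_amplitude(dt, dm)

The Δt at which the binned |Δm| stops growing is one possible proxy for the dominant variability timescale of an aperiodic source.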

Relevance:

20.00%

Publisher:

Abstract:

This thesis consists of two parts. In Part I, we develop a multipole moment formalism in general relativity and use it to analyze the motion and precession of compact bodies. More specifically, the generic, vacuum, dynamical gravitational field of the exterior universe in the vicinity of a freely moving body is expanded in positive powers of the distance r away from the body's spatial origin (i.e., in the distance r from its timelike-geodesic world line). The expansion coefficients, called "external multipole moments," are defined covariantly in terms of the Riemann curvature tensor and its spatial derivatives evaluated on the body's central world line. In a carefully chosen class of de Donder coordinates, the expansion of the external field involves only integral powers of r; no logarithmic terms occur. The expansion is used to derive higher-order corrections to previously known laws of motion and precession for black holes and other bodies. The resulting laws of motion and precession are expressed in terms of couplings of the time derivatives of the body's quadrupole and octupole moments to the external moments, i.e., to the external curvature and its gradient.

In Part II, we study the interaction of magnetohydrodynamic (MHD) waves in a black-hole magnetosphere with the "dragging of inertial frames" effect of the hole's rotation, i.e., with the hole's "gravitomagnetic field." More specifically, we first rewrite the laws of perfect general relativistic magnetohydrodynamics (GRMHD) in 3+1 language in a general spacetime, in terms of quantities (magnetic field, flow velocity, ...) that would be measured by the "fiducial observers" whose world lines are orthogonal to (arbitrarily chosen) hypersurfaces of constant time. We then specialize to a stationary spacetime and MHD flow with one arbitrary spatial symmetry (e.g., the stationary magnetosphere of a Kerr black hole), and for this spacetime we reduce the GRMHD equations to a set of algebraic equations. The general features of the resulting stationary, symmetric GRMHD magnetospheric solutions are discussed, including the Blandford-Znajek effect, in which the gravitomagnetic field interacts with the magnetosphere to produce an outflowing jet. Then, in a specific model spacetime with two spatial symmetries, which captures the key features of the Kerr geometry, we derive the GRMHD equations that govern weak, linearized perturbations of a stationary magnetosphere with an outflowing jet. These perturbation equations are then Fourier analyzed in time t and in the symmetry coordinate x, and subsequently solved numerically. The numerical solutions describe the interaction of MHD waves with the gravitomagnetic field. It is found that, among other features, when an oscillatory external force is applied to the region of the magnetosphere where plasma (e⁺e⁻) is being created, the magnetosphere responds especially strongly at a particular, resonant, driving frequency. The resonant frequency is that for which the perturbations appear to be stationary (time independent) in the common rest frame of the freshly created plasma and the rotating magnetic field lines. The magnetosphere of a rotating black hole, when buffeted by nonaxisymmetric magnetic fields anchored in a surrounding accretion disk, might exhibit an analogous resonance. If so, then the hole's outflowing jet might be modulated at resonant frequencies ω = (m/2) Ω_H, where m is an integer and Ω_H is the hole's angular velocity.
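A schematic reading of that resonance condition, under the additional assumption (not stated explicitly above) that the field lines rotate at roughly half the hole's angular velocity, as in Blandford-Znajek-type magnetospheres, is

\[
\omega - m\,\Omega_{F} = 0 ,
\qquad
\Omega_{F} \simeq \tfrac{1}{2}\,\Omega_{H}
\;\Longrightarrow\;
\omega = \tfrac{m}{2}\,\Omega_{H} ,
\]

i.e., a perturbation with mode number m along the symmetry direction looks time independent in a frame corotating with the field lines and freshly created plasma, reproducing the quoted resonant frequencies.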

Relevance:

20.00%

Publisher:

Abstract:

Few credible source models are available from past large-magnitude earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment-frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under three-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
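A hedged sketch of the final aggregation step, combining each scenario's 30-year occurrence probability with the simulated exceedance outcome at one site, is shown below; the probabilities and outcomes are random placeholders, and the independence approximation is an assumption of this sketch, not necessarily the aggregation used in the thesis.

import numpy as np

# Combine each scenario's 30-year occurrence probability with an indicator of
# whether the simulated building response exceeded a performance level.
rng = np.random.default_rng(0)
p_event = np.full(60, 0.002)            # placeholder 30-yr probability per scenario
exceeds_cp = rng.random(60) < 0.1       # placeholder: did response exceed, e.g., CP?

# Treat scenarios as (approximately) independent rare events:
p_exceed_30yr = 1.0 - np.prod(1.0 - p_event * exceeds_cp)
print(f"P(exceed CP in 30 years) ~ {p_exceed_30yr:.4f}")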

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and the surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are used as the PGV and PGD data.

Relevance:

20.00%

Publisher:

Abstract:

Experimental measurements of the rate of energy loss were made for protons of energy 0.5 to 1.6 MeV channeling through 1 μm thick silicon targets along the <110>, <111>, and <211> axial directions, and the {100}, {110}, {111}, and {211} planar directions. A 0.05%-resolution, automatically controlled magnetic spectrometer was used. The data are presented graphically along with an extensive summary of data in the literature. The data taken cover a wider range of channels than has previously been examined, and are in agreement with the data of F. Eisen, et al., Rad. Eff. 13, 93 (1972).

The theory in the literature for channeling energy loss due to interaction with local electrons, core electrons, and distant valence electrons of the crystal atoms is summarized. Straggling is analyzed, and a computer program is described which calculates energy loss and straggling using this theory, the Molière approximation to the Thomas-Fermi potential V_TF, and the detailed silicon crystal structure. Values for the local electron density Z_loc in each of the channels listed above are extracted from the data by graphical matching of the experimental and computer results.

Zeroth- and second-order contributions to Z_loc as a function of distance from the center of the channel were computed from ∇²V_TF = 4πρ for various channels in silicon. For the data taken in this work and the data of F. Eisen, et al., Rad. Eff. 13, 93 (1972), the calculated zeroth-order contribution to Z_loc lies between the experimentally extracted Z_loc values obtained using the peak and the leading edge of the transmission spectra, suggesting that the observed straggling is due both to statistical fluctuations and to path variation.

Relevance:

20.00%

Publisher:

Abstract:

The reaction K⁻p → K⁻π⁺n has been studied at an incident kaon momentum of 2.0 GeV/c. A sample of 19,881 events was obtained by measuring film taken as part of the K-63 experiment in the Berkeley 72-inch bubble chamber.

Based upon our analysis, we have reached four conclusions. (1) The magnitude of the extrapolated Kπ cross section differs by a factor of 2 from the P-wave unitarity prediction and the K⁺n results; this is probably due to absorptive effects. (2) Fits to the moments yield precise values for the Kπ S-wave which agree with other recent, statistically accurate experiments. (3) An anomalous peak is present in our backward K⁻p → (π⁺n)K⁻ u-distribution. (4) We find a non-linear enhancement due to interference similar to the one found by Bland et al. (Bland 1966).

Relevance:

20.00%

Publisher:

Abstract:

A review is presented of the statistical bootstrap model of Hagedorn and Frautschi. This model is an attempt to apply the methods of statistical mechanics in high-energy physics, while treating all hadron states (stable or unstable) on an equal footing. A statistical calculation of the resonance spectrum on this basis leads to an exponentially rising level density ρ(m) ~ c m⁻³ e^(β₀m) at high masses.
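The significance of the exponential growth can be seen from a standard argument (generic notation; not the thesis' detailed formulae): with ρ(m) ~ c m⁻³ e^(β₀m), the hadronic partition function behaves as

\[
Z(T) \;\propto\; \int^{\infty} dm\; \rho(m)\, e^{-m/T}
\;\sim\; \int^{\infty} dm\; c\, m^{-3}\, e^{(\beta_0 - 1/T)\,m},
\]

which diverges for T > T₀ = 1/β₀. Thus β₀ sets a limiting (Hagedorn) temperature, and the value β₀ ≈ (120 MeV)⁻¹ quoted later in this abstract corresponds to T₀ ≈ 120 MeV.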

In the present work, explicit formulae are given for the asymptotic dependence of the level density on quantum numbers, in various cases. Hamer and Frautschi's model for a realistic hadron spectrum is described.

A statistical model for hadron reactions is then put forward, analogous to the Bohr compound nucleus model in nuclear physics, which makes use of this level density. Some general features of resonance decay are predicted. The model is applied to the process of NN̄ annihilation at rest with overall success, and explains the high final-state pion multiplicity, together with the low individual branching ratios into two-body final states, which are characteristic of the process. For more general reactions, the model needs modification to take account of correlation effects. Nevertheless, it is capable of explaining the phenomenon of limited transverse momenta, and the exponential decrease in the production frequency of heavy particles with their mass, as shown by Hagedorn. Frautschi's results on "Ericson fluctuations" in hadron physics are outlined briefly. The value of β₀ required in all these applications is consistently around (120 MeV)⁻¹, corresponding to a "resonance volume" whose radius is very close to ƛ_π. The construction of a "multiperipheral cluster model" for high-energy collisions is advocated.