15 results for STOCHASTIC SEARCH

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

Many particles proposed by theories, such as GUT monopoles, nuclearites and 1/5 charge superstring particles, can be categorized as Slow-moving, Ionizing, Massive Particles (SIMPs).

Detailed calculations of the signal-to-noise ratios in various acoustic and mechanical methods for detecting such SIMPs are presented. It is shown that the previous belief that such methods are intrinsically prohibited by thermal noise is incorrect, and that ways to solve the thermal-noise problem are already within the reach of today's technology. In fact, many ongoing and completed gravitational wave detection (GWD) experiments are already sensitive to certain SIMPs. As an example, a published GWD result is used to obtain a flux limit for nuclearites.

The result of a search using a scintillator array on Earth's surface is reported. A flux limit of 4.7 × 10^(-12) cm^(-2) sr^(-1) s^(-1) (90% c.l.) is set for any SIMP with 2.7 × 10^(-4) < β < 5 × 10^(-3) and ionization greater than 1/3 of minimum ionizing muons. Although this limit is above the limits from underground experiments for typical supermassive particles (10^(16) GeV), it is a new limit in certain β and ionization regions for less massive ones (~10^9 GeV) not able to penetrate deep underground, and implies a stringent limit on the fraction of the dark matter that can be composed of massive electrically and/or magnetically charged particles.

The prospect of a future SIMP search in the MACRO detector is discussed. The special problem of triggering on SIMPs is examined, and a circuit is proposed that may solve most of the problems of previous designs proposed or used by others, and may even enable MACRO to detect certain SIMP species with β as low as the orbital velocity around the Earth.

Abstract:

The problem of "exit against a flow" for dynamical systems subject to small Gaussian white noise excitation is studied. Here the word "flow" refers to the behavior in phase space of the unperturbed system's state variables. "Exit against a flow" occurs if a perturbation causes the phase point to leave a phase space region within which it would normally be confined. In particular, there are two components of the problem of exit against a flow:

i) the mean exit time

ii) the phase-space distribution of exit locations.

When the noise perturbing the dynamical systems is small, the solution of each component of the problem of exit against a flow is, in general, the solution of a singularly perturbed, degenerate elliptic-parabolic boundary value problem.
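
For concreteness, the mean exit time component typically takes the following form (this is the generic textbook formulation, not necessarily the exact setup of the thesis): for dynamics dx = b(x) dt + sqrt(eps) dW confined to a region D, the mean exit time u(x) from starting point x satisfies

```latex
\frac{\varepsilon}{2}\,\Delta u(x) + b(x)\cdot\nabla u(x) = -1 \quad \text{in } D,
\qquad u(x) = 0 \quad \text{on } \partial D .
```

The problem is singularly perturbed because the small parameter eps multiplies the highest-order derivative, and it is degenerate wherever the noise does not act in all directions.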

Singular perturbation techniques are used to express the asymptotic solution in terms of an unknown parameter. The unknown parameter is determined using the solution of the adjoint boundary value problem.

The problem of exit against a flow for several dynamical systems of physical interest is considered, and the mean exit times and distributions of exit positions are calculated. The systems are then simulated numerically, using Monte Carlo techniques, in order to determine the validity of the asymptotic solutions.
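
Such a Monte Carlo validation can be sketched in one dimension. The system dX = -X dt + sqrt(eps) dW on the interval (-1, 1) and the function name below are my own illustrative choices, not the systems studied in the thesis:

```python
import math
import random

def mean_exit_time(eps, n_paths=2000, dt=1e-3, seed=1):
    """Estimate the mean exit time of dX = -X dt + sqrt(eps) dW
    from the interval (-1, 1), starting at X(0) = 0, via the
    Euler-Maruyama scheme.

    The drift -X pushes the state back toward 0, so exit happens
    only through rare noise-driven excursions ("exit against a flow").
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while abs(x) < 1.0:
            x += -x * dt + math.sqrt(eps * dt) * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths
```

For moderate eps this runs in seconds; as eps decreases, the estimated mean exit time grows exponentially, which is exactly the regime the singular-perturbation asymptotics describe.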

Abstract:

A theory of two-point boundary value problems is developed, analogous to the theory of initial value problems for stochastic ordinary differential equations whose solutions form Markov processes. The theory of initial value problems consists of three main parts: the proof that the solution process is Markovian and diffusive; the construction of the Kolmogorov or Fokker-Planck equation of the process; and the proof that the transition probability density of the process is a unique solution of the Fokker-Planck equation.

It is assumed here that the stochastic differential equation under consideration has, as an initial value problem, a diffusive Markovian solution process. When a given boundary value problem for this stochastic equation almost surely has unique solutions, we show that the solution process of the boundary value problem is also a diffusive Markov process. Since a boundary value problem, unlike an initial value problem, has no preferred direction for the parameter set, we find that there are two Fokker-Planck equations, one for each direction. It is shown that the density of the solution process of the boundary value problem is the unique simultaneous solution of this pair of Fokker-Planck equations.
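
Schematically, and only as the generic one-dimensional form rather than the thesis's exact statement, such a pair consists of a Fokker-Planck equation in each parameter direction for the density p(x, t):

```latex
\frac{\partial p}{\partial t}
= -\frac{\partial}{\partial x}\bigl[a(x,t)\,p\bigr]
+ \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[b(x,t)\,p\bigr],
\qquad
-\frac{\partial p}{\partial t}
= -\frac{\partial}{\partial x}\bigl[\tilde{a}(x,t)\,p\bigr]
+ \frac{1}{2}\,\frac{\partial^{2}}{\partial x^{2}}\bigl[\tilde{b}(x,t)\,p\bigr],
```

where a, b and their reversed-direction counterparts are drift and diffusion coefficients; the density of the boundary value problem's solution must satisfy both equations simultaneously.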

This theory is then applied to the problem of a vibrating string with stochastic density.

Abstract:

Part I of the thesis describes the olfactory searching and scanning behaviors of rats in a wind tunnel, and presents a detailed movement analysis of terrestrial arthropod olfactory scanning behavior. Olfactory scanning behaviors in rats may be a behavioral correlate of hippocampal place cell activity.

Part II focuses on the organization of olfactory perception, what it suggests about a natural order for chemicals in the environment, and what this in turn suggests about the organization of the olfactory system. A model of odor quality space (analogous to the "color wheel") is presented. This model defines relationships between odor qualities perceived by human subjects based on a quantitative similarity measure. Compounds containing carbon, nitrogen, or sulfur elicit odors that are contiguous in this odor representation, which thus allows one to predict the broad class of odor qualities a compound is likely to elicit. Based on these findings, a natural organization for olfactory stimuli is hypothesized: the order provided by the metabolic process. This hypothesis is tested by comparing compounds that are structurally similar, perceptually similar, and metabolically similar in a psychophysical cross-adaptation paradigm. Metabolically similar compounds consistently evoked shifts in odor quality and intensity under cross-adaptation, while compounds that were structurally similar or perceptually similar did not. This suggests that the olfactory system may process metabolically similar compounds using the same neural pathways, and that metabolic similarity may be the fundamental metric about which olfactory processing is organized. In other words, the olfactory system may be organized around a biological basis.

The idea of a biological basis for olfactory perception represents a shift in how olfaction is understood. The biological view has predictive power while the current chemical view does not, and the biological view provides explanations for some of the most basic questions in olfaction that are unanswered in the chemical view. Existing data do not disprove a biological view, and are consistent with basic hypotheses that arise from this viewpoint.

Abstract:

Rates for A(e, e'p) on the nuclei ^2H, C, Fe, and Au have been measured at momentum transfers Q^2 = 1, 3, 5, and 6.8 (GeV/c)^2. We extract the nuclear transparency T, a measure of the importance of final state interactions (FSI) between the outgoing proton and the recoil nucleus. Some calculations based on perturbative QCD predict an increase in T with momentum transfer, a phenomenon known as Color Transparency. No statistically significant rise is seen in the present experiment.

Abstract:

A search for dielectron decays of heavy neutral resonances has been performed using proton-proton collision data collected at √s = 7 TeV by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in 2011. The data sample corresponds to an integrated luminosity of 5 fb^(−1). The dielectron mass distribution is consistent with Standard Model (SM) predictions. An upper limit on the ratio of the cross section times branching fraction of new bosons, normalized to the cross section times branching fraction of the Z boson, is set at the 95% confidence level. This result is translated into limits on the mass of new neutral particles at the level of 2120 GeV for the Z′ in the Sequential Standard Model, 1810 GeV for the superstring-inspired Z′_ψ resonance, and 1940 (1640) GeV for Kaluza-Klein gravitons with the coupling parameter k/M_Pl of 0.10 (0.05).

Abstract:

The AM CVn systems are a rare class of ultra-compact astrophysical binaries. With orbital periods of under an hour and as short as five minutes, they are among the closest known binary star systems and their evolution has direct relevance to the type Ia supernova rate and the white dwarf binary population. However, their faint and rare nature has made population studies of these systems difficult and several studies have found conflicting results.

I undertook a survey for AM CVn systems using the Palomar Transient Factory (PTF) astrophysical synoptic survey by exploiting the "outbursts" these systems undergo. Such events result in an increase in luminosity by a factor of up to two hundred and are detectable in time-domain photometric data of AM CVn systems. My search resulted in the discovery of eight new systems, over 20% of the currently known population. More importantly, this search was done in a systematic fashion, which allows for a population study properly accounting for biases.

Apart from the discovery of new systems, I used the time-domain data from the PTF and other synoptic surveys to better understand the long-term behavior of these systems. This analysis of the photometric behavior of the majority of known AM CVn systems has shown changes in their behavior at longer time scales than have previously been observed. This has allowed me to find relationships between the outburst properties of an individual system and its orbital period.

Even more importantly, the systematically selected sample together with these properties have allowed me to conduct a population study of the AM CVn systems. I have shown that the latest published estimates of the AM CVn system population, a factor of fifty below theoretical estimates, are consistent with the sample of systems presented here. This is particularly noteworthy since my population study is most sensitive to a different orbital period regime than earlier surveys. This confirmation of the population density will allow the AM CVn systems population to be used in the study of other areas of astrophysics.

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters taken from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure; this is an optimization problem.

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power necessary to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
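
The combination of preference functions and a genetic search can be sketched as follows. This is a minimal illustration of my own: the toy cost and deflection models, the product combination rule, and the function names are assumptions, and the sketch is a plain real-coded GA rather than the thesis's hGA or vGA:

```python
import random

# Hypothetical per-criterion preference functions ("soft" criteria):
# each maps a performance value to a degree of satisfaction in [0, 1].
def pref_cost(c):        # prefer low cost
    return max(0.0, min(1.0, (10.0 - c) / 10.0))

def pref_deflection(d):  # prefer small deflection
    return max(0.0, min(1.0, 1.0 - d))

def evaluate(x):
    """Overall evaluation: combine criterion preferences (here, by product).
    x is a design parameter (e.g. a member size); the performance model
    below is a toy stand-in, not a real structural analysis."""
    cost = x * 2.0                 # cost grows with member size
    deflection = 1.0 / (x + 0.5)   # deflection shrinks with member size
    return pref_cost(cost) * pref_deflection(deflection)

def genetic_search(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(0.1, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        parents = scored[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)                  # arithmetic crossover
            child += rng.gauss(0.0, 0.1)           # Gaussian mutation
            children.append(min(5.0, max(0.1, child)))
        pop = parents + children
    return max(pop, key=evaluate)
```

Here truncation selection keeps the top half of each generation (so the best design never degrades), arithmetic crossover averages two parents, and a small Gaussian mutation maintains exploration.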

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Abstract:

Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In the thesis, we propose some efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.

For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part of which the magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth, and could be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.

For the stochastic PDEs, we propose the model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For the tougher problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.
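
The multilevel Monte Carlo idea can be sketched in a scalar toy problem. Here the "level-l solver" is just a truncated Taylor series for exp (a stand-in for a PDE solve on a level-l mesh), and all names and parameter values are my own illustrative choices:

```python
import random

def approx(z, level):
    """Level-l approximation of exp(z): Taylor series truncated
    after level+2 terms (a stand-in for a coarse-mesh PDE solve)."""
    s, term = 1.0, 1.0
    for k in range(1, level + 2):
        term *= z / k
        s += term
    return s

def mlmc_estimate(levels=5, n0=4000, seed=0):
    """Multilevel Monte Carlo estimate of E[approx(Z, levels)], Z ~ N(0, 0.5^2).
    Each correction E[P_l - P_(l-1)] is sampled with the SAME draw of Z
    at both levels, so its variance decays and few samples are needed."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(levels + 1):
        n = max(50, n0 // 2 ** level)      # fewer samples on finer levels
        acc = 0.0
        for _ in range(n):
            z = rng.gauss(0.0, 0.5)
            fine = approx(z, level)
            coarse = approx(z, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

The telescoping sum reproduces the fine-level expectation, but most samples are spent on the cheap coarse level; the correction terms are sampled with the same random draw at both levels, so their variance, and hence the required sample count, shrinks rapidly with level.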

For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the reduction in computational cost in the numerical examples.

Abstract:

The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems with known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for all but systems of modest dimension.

In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
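
The discrete analogue mentioned above can be made concrete. In a linearly solvable Markov Decision Process, the exponentiated value function, the so-called desirability z(s) = exp(-V(s)), satisfies a linear fixed-point equation. The following five-state chain, its costs, and the function name are illustrative assumptions of my own, not systems from the thesis:

```python
import math

def solve_desirability(n=5, q=0.1, iters=500):
    """Desirability iteration for a linearly solvable MDP on a chain
    of states 0..n-1, where state n-1 is an absorbing goal.

    Under passive random-walk dynamics (reflecting at 0) and a state
    cost q per step, z(s) = exp(-V(s)) satisfies the LINEAR equation
        z(s) = exp(-q) * sum_s' p(s'|s) * z(s'),   z(goal) = 1,
    so the optimal value function is obtained by a linear fixed-point
    iteration rather than nonlinear Bellman backups.
    """
    z = [1.0] * n                      # z[n-1] stays fixed at 1 (goal)
    for _ in range(iters):
        new = z[:]
        for s in range(n - 1):
            left = z[s - 1] if s > 0 else z[0]   # reflecting boundary
            right = z[s + 1]
            new[s] = math.exp(-q) * 0.5 * (left + right)
        z = new
    return [-math.log(zs) for zs in z]           # V(s) = -log z(s)
```

On this chain the iteration converges geometrically (each sweep contracts by at least exp(-q)), and V(s) = -log z(s) recovers the optimal cost-to-go, which grows with distance from the goal.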

This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.

The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.

The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.

Abstract:

The LIGO and Virgo gravitational-wave observatories are complex and extremely sensitive strain detectors that can be used to search for a wide variety of gravitational waves from astrophysical and cosmological sources. In this thesis, I motivate the search for the gravitational wave signals from coalescing black hole binary systems with total mass between 25 and 100 solar masses. The mechanisms for formation of such systems are not well-understood, and we do not have many observational constraints on the parameters that guide the formation scenarios. Detection of gravitational waves from such systems — or, in the absence of detection, the tightening of upper limits on the rate of such coalescences — will provide valuable information that can inform the astrophysics of the formation of these systems. I review the search for these systems and place upper limits on the rate of black hole binary coalescences with total mass between 25 and 100 solar masses. I then show how the sensitivity of this search can be improved by up to 40% by the application of the multivariate statistical classifier known as a random forest of bagged decision trees to more effectively discriminate between signal and non-Gaussian instrumental noise. I also discuss the use of this classifier in the search for the ringdown signal from the merger of two black holes with total mass between 50 and 450 solar masses and present upper limits. I also apply multivariate statistical classifiers to the problem of quantifying the non-Gaussianity of LIGO data. Despite these improvements, no gravitational-wave signals have been detected in LIGO data so far. However, the use of multivariate statistical classification can significantly improve the sensitivity of the Advanced LIGO detectors to such signals.

Abstract:

While synoptic surveys in the optical and at high energies have revealed a rich discovery phase space of slow transients, a similar yield is still awaited in the radio. The majority of past blind surveys, carried out with radio interferometers, have suffered from a low yield of slow transients, ambiguous transient classifications, and contamination by false positives. The newly refurbished Karl G. Jansky Very Large Array (Jansky VLA) offers wider bandwidths for accurate RFI excision as well as substantially improved sensitivity and survey speed compared with the old VLA. The Jansky VLA thus eliminates the pitfalls of interferometric transient searches by facilitating sensitive, wide-field, and near-real-time radio surveys and enabling a systematic exploration of the dynamic radio sky. This thesis aims at carrying out blind Jansky VLA surveys for characterizing the radio variable and transient sources at frequencies of a few GHz and on timescales between days and years. Through joint radio and optical surveys, the thesis addresses outstanding questions pertaining to the rates of slow radio transients (e.g. radio supernovae, tidal disruption events, binary neutron star mergers, stellar flares, etc.), the false-positive foreground relevant for the radio and optical counterpart searches of gravitational wave sources, and the beaming factor of gamma-ray bursts. The need for rapid processing of the Jansky VLA data and near-real-time radio transient searches has motivated the development of state-of-the-art software infrastructure. This thesis has successfully demonstrated the Jansky VLA as a powerful transient search instrument, and it serves as a pathfinder for the transient surveys planned for the SKA-mid pathfinder facilities, viz. ASKAP, MeerKAT, and WSRT/Apertif.

Abstract:

A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.

The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.

The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes are characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer’s work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.

First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples are presented; viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.
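
The reduction to first-order ordinary differential equations can be illustrated with the Ornstein-Uhlenbeck process. The parameter values and function name below are my own illustrative choices; the moment equations themselves are the standard ones implied by the drift and diffusion coefficients:

```python
def ou_moments(theta, sigma, m0, v0, t, dt=1e-4):
    """For the Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW,
    the drift and diffusion coefficients give first-order ODEs for the
    unconditional mean and variance:
        m'(t) = -theta * m(t)
        v'(t) = -2*theta * v(t) + sigma**2
    Integrate them directly with Euler steps (no PDE solve needed)."""
    m, v = m0, v0
    for _ in range(int(t / dt)):
        m += -theta * m * dt
        v += (-2.0 * theta * v + sigma ** 2) * dt
    return m, v
```

With theta = 1, sigma = 1, m0 = 2, v0 = 0, and t = 1, the integrated values agree with the closed forms m0·e^(-theta·t) and (sigma^2/(2·theta))·(1 - e^(-2·theta·t)) to within the Euler step error.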

In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from the basic definition. This illustrates the fundamental properties, and an inherent paradox. Next, the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markov process.

The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.

Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.

Abstract:

An array of two spark chambers and six trays of plastic scintillation counters was used to search for unaccompanied fractionally charged particles in cosmic rays near sea level. No acceptable events were found with energy losses by ionization between 0.04 and 0.7 that of unit-charged minimum-ionizing particles. New 90%-confidence upper limits were thereby established for the fluxes of fractionally charged particles in cosmic rays, namely, (1.04 ± 0.07) × 10^(-10) and (2.03 ± 0.16) × 10^(-10) cm^(-2) sr^(-1) s^(-1) for minimum-ionizing particles with charges 1/3 and 2/3, respectively.

In order to be certain that the spark chambers could have functioned for the low levels of ionization expected from particles with small fractional charges, tests were conducted to estimate the efficiency of the chambers as they had been used in this experiment. These tests showed that the spark-chamber system with the track-selection criteria used might have been over 99% efficient for the entire range of energy losses considered.

Lower limits were then obtained for the mass of a quark by considering the above flux limits and a particular model for the production of quarks in cosmic rays. In this model, which is one involving the multi-peripheral Regge hypothesis, the production cross section and a corresponding mass limit are critically dependent on the Regge trajectory assigned to a quark. If quarks are "elementary" with a flat trajectory, the mass of a quark can be expected to be at least 6 ± 2 BeV/c^2. If quarks have a trajectory with unit slope, just as the existing hadrons do, the mass of a quark might be as small as 1.3 ± 0.2 BeV/c^2. For a trajectory with unit slope and a mass larger than a couple of BeV/c^2, the production cross section may be so low that quarks might never be observed in nature.

Abstract:

H. J. Kushner has obtained the differential equation satisfied by the optimal feedback control law for a stochastic control system in which the plant dynamics and observations are perturbed by independent additive Gaussian white noise processes. However, the equation involves the first and second functional derivatives and, except for a restricted set of systems, is too complex to solve with present techniques.

This investigation studies the optimal control law for the open-loop system and incorporates it in a suboptimal feedback control law. The performance of this suboptimal feedback law is at least as good as that of the optimal open-loop control function, and the law satisfies a differential equation involving only the first functional derivative. Solving this equation is equivalent to solving two two-point boundary-value integro-partial differential equations. An approximate solution has advantages over the conventional approximate solution of Kushner's equation.

As a result of this study, well-known results of deterministic optimal control are deduced from the analysis of optimal open-loop control.