11 results for "Model-based bootstrap"

in CaltechTHESIS


Relevance: 90.00%

Abstract:

The olfactory bulb of mammals aids in the discrimination of odors. A mathematical model based on the bulbar anatomy and electrophysiology is described. Simulations of the highly non-linear model produce a 35-60 Hz modulated activity, which is coherent across the bulb. The decision states (for the odor information) in this system can be thought of as stable cycles, rather than as the point stable states typical of simpler neuro-computing models. Analysis shows that a group of coupled non-linear oscillators is responsible for the oscillatory activity. The output oscillation pattern of the bulb is determined by the odor input. The model provides a framework in which to understand the transformation between odor input and bulbar output to the olfactory cortex. The model can also be extended to other brain areas that show oscillatory neural activity, such as the hippocampus, thalamus, and neocortex. There is significant correspondence between the model behavior and observed electrophysiology.
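
The coupled-oscillator mechanism can be illustrated with a minimal sketch (not the thesis model itself): a single excitatory-inhibitory feedback pair already oscillates at a frequency set by the coupling strengths. All parameters below are illustrative choices tuned to land in the reported 35-60 Hz band.

```python
def simulate_ei_pair(w_ei=251.3, w_ie=251.3, dt=1e-4, t_end=0.1):
    """Semi-implicit Euler integration of a linearized excitatory-inhibitory
    loop (a caricature of the mitral/granule-cell interaction): excitation
    recruits inhibition, inhibition suppresses excitation."""
    E, I = 1.0, 0.0
    trace = []
    for _ in range(int(t_end / dt)):
        E -= w_ei * I * dt   # inhibitory feedback onto the excitatory unit
        I += w_ie * E * dt   # excitatory drive onto the inhibitory unit
        trace.append(E)
    return trace

def count_zero_crossings(trace):
    return sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)

# The pair's natural frequency is sqrt(w_ei * w_ie) / (2*pi) ~ 40 Hz,
# inside the 35-60 Hz band produced by the full bulb model.
trace = simulate_ei_pair()
freq = count_zero_crossings(trace) / (2 * 0.1)   # two crossings per cycle over 0.1 s
```

In the full model the limit cycle is shaped by the nonlinearity and the odor input; this linear pair only shows where the oscillation frequency comes from.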

It has also been suggested that the olfactory bulb, the first processing center after the sensory cells in the olfactory pathway, plays a role in olfactory adaptation, odor sensitivity enhancement by motivation, and other olfactory psychophysical phenomena. The input from the higher olfactory centers to the inhibitory cells in the bulb is shown to be able to modulate the response, and thus the sensitivity, of the bulb to odor input. It follows that the bulb can decrease its sensitivity to a pre-existing and detected odor (adaptation) while remaining sensitive to new odors, or can increase its sensitivity to discover interesting new odors. Other olfactory psychophysical phenomena such as cross-adaptation are also discussed.

Relevance: 90.00%

Abstract:

In the first part of this thesis we search for physics beyond the Standard Model through anomalous production of the Higgs boson, using the razor kinematic variables. The search uses proton-proton collisions at a center-of-mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb^-1.

In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s=8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two photon final state.

The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found from Standard Model production of the Higgs boson.

We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10^-3 and a local significance of 3.35σ. This background prediction comprises 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it forms a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
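
For orientation, the order of magnitude of such an excess can be checked with a bare Poisson tail probability. This ignores the ±0.28 uncertainty on the background estimate, which the full analysis folds in and which relaxes the p-value toward the quoted 10^-3.

```python
import math

def poisson_p_value(n_obs, mu_bkg):
    """Tail probability P(N >= n_obs) for a Poisson-distributed background
    with mean mu_bkg, with no uncertainty on the background estimate."""
    return 1.0 - sum(math.exp(-mu_bkg) * mu_bkg ** k / math.factorial(k)
                     for k in range(n_obs))

# 5 observed events on an expected background of 0.54 events: the bare
# Poisson tail comes out at a few times 1e-4.
p = poisson_p_value(5, 0.54)
```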

In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and train the classifier. We find that we are able to do this successfully in less than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with the more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
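
The spin-Hamiltonian mapping can be sketched in miniature. This is not the thesis' exact Hamiltonian: below, four hypothetical weak classifiers vote on synthetic two-class events, the cost is the squared error of the spin-weighted vote against the label, and because the spin space is tiny the ground state is found by enumeration rather than by annealing.

```python
from itertools import product

# Toy training set: each event is (weak-classifier outputs, true label),
# with outputs and labels in {-1, +1}.  Entirely synthetic data.
events = [
    ([+1, +1, -1, +1], +1),
    ([+1, -1, -1, +1], +1),
    ([-1, -1, +1, -1], -1),
    ([-1, +1, +1, -1], -1),
    ([+1, +1, +1, +1], +1),
    ([-1, -1, -1, -1], -1),
]

def energy(spins):
    """QUBO-style cost: squared error of the spin-weighted vote against
    the label, summed over events.  Each spin includes (1) or excludes (0)
    one weak classifier; the ground state is the trained classifier."""
    return sum(
        (sum(s * c for s, c in zip(spins, cs)) - y) ** 2
        for cs, y in events
    )

# A quantum annealer searches this spin space physically; for 4 weak
# classifiers we can simply enumerate all 2**4 configurations.
best = min(product((0, 1), repeat=4), key=energy)

def classify(spins, cs):
    vote = sum(s * c for s, c in zip(spins, cs))
    return +1 if vote >= 0 else -1

accuracy = sum(classify(best, cs) == y for cs, y in events) / len(events)
```

At realistic problem sizes the enumeration is infeasible, which is where the annealer (or simulated annealing) takes over; the excited states the annealer also returns are the "additional solutions" the abstract mentions.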

Relevance: 80.00%

Abstract:

A series of experiments was conducted on the use of a device to passively generate vortex rings, henceforth referred to as a passive vortex generator (PVG). The device is intended as a means of propulsion for underwater vehicles, as the use of vortex rings has been shown to decrease the fuel consumption of a vehicle by up to 40% (Ruiz, 2010).

The PVG was constructed out of a collapsible tube encased in a rigid, airtight box. By adjusting the pressure within the airtight box while fluid was flowing through the tube, it was possible to create a pulsed jet with vortex rings via self-excited oscillations of the collapsible tube.

A study of PVG integration into an existing autonomous underwater vehicle (AUV) system was conducted. A PVG was retrofitted to a small AUV with limited alterations to the original vehicle. The PVG-integrated AUV was used for self-propelled testing to measure the hydrodynamic (Froude) efficiency of the system. The results show that the PVG-integrated AUV had a 22% increase in the Froude efficiency using a pulsed jet over a steady jet. The maximum increase in the Froude efficiency was realized when the formation time of the pulsed jet, a nondimensional time to characterize vortex ring formation, was coincident with vortex ring pinch-off. This is consistent with previous studies that indicate that the maximization of efficiency for a pulsed jet vehicle is realized when the formation of vortex rings maximizes the vortex ring energy and size.

The other study was a parameter study of the physical dimensions of a PVG. This study was conducted to determine the effect of the tube diameter and length on the oscillation characteristics such as the frequency. By changing the tube diameter and length by factors of 3, the frequency of self-excited oscillations was found to scale as f~D_0^{-1/2} L_0^0, where D_0 is the tube diameter and L_0 the tube length. The mechanism of operation is suggested to rely on traveling waves between the tube throat and the end of the tube. A model based on this mechanism yields oscillation frequencies that are within the range observed by the experiment.
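
The reported scaling can be checked numerically: on synthetic frequency data generated from f = C·D^(-1/2) (with an arbitrary prefactor C), a log-log least-squares fit recovers the exponent. All numbers below are illustrative, not measurements from the study.

```python
import math

def fit_power_exponent(xs, ys):
    """Least-squares slope of log(y) versus log(x): the power-law exponent."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic frequencies following f = C * D**-0.5, for tube diameters
# spanning a factor of 3 as in the parameter study (units illustrative).
diameters = [10.0, 17.3, 30.0]
freqs = [100.0 * d ** -0.5 for d in diameters]
exponent = fit_power_exponent(diameters, freqs)   # recovers -0.5
```

The same fit applied against tube length would return an exponent near zero, i.e., the L_0^0 dependence quoted above.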

Relevance: 80.00%

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
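
The weighting-and-updating procedure can be sketched with a discrete model set (a continuous model space is what requires the asymptotic approximation described above). The three models, their probabilities, the conditional failure probabilities, and the data likelihoods below are all invented for illustration.

```python
# Robust failure probability: the model-probability-weighted average of
# the conditional failure probabilities (discrete analogue of the
# integral over the model set).
models = ["stiff", "nominal", "soft"]
prior = {"stiff": 0.25, "nominal": 0.5, "soft": 0.25}
p_fail_given_model = {"stiff": 0.01, "nominal": 0.02, "soft": 0.10}

def robust_failure_probability(model_prob):
    return sum(model_prob[m] * p_fail_given_model[m] for m in models)

p_before = robust_failure_probability(prior)

# Bayes's Theorem update once response data arrive: posterior is
# proportional to likelihood * prior.  Likelihood values are assumed.
likelihood = {"stiff": 0.1, "nominal": 0.8, "soft": 0.1}
evidence = sum(likelihood[m] * prior[m] for m in models)
posterior = {m: likelihood[m] * prior[m] / evidence for m in models}

p_after = robust_failure_probability(posterior)
```

Here the data favor the nominal model, so probability mass shifts away from the failure-prone "soft" model and the updated robust failure probability drops; a controller re-optimized against the posterior would exploit exactly this shift.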

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance: 80.00%

Abstract:

Progress is made on the numerical modeling of both laminar and turbulent non-premixed flames. Instead of solving the transport equations for the numerous species involved in the combustion process, the present study proposes reduced-order combustion models based on local flame structures.

For laminar non-premixed flames, curvature and multi-dimensional diffusion effects are found critical for the accurate prediction of sooting tendencies. A new numerical model based on modified flamelet equations is proposed. Sooting tendencies are calculated numerically using the proposed model for a wide range of species. These first numerically-computed sooting tendencies are in good agreement with experimental data. To further quantify curvature and multi-dimensional effects, a general flamelet formulation is derived mathematically. A budget analysis of the general flamelet equations is performed on an axisymmetric laminar diffusion flame. A new chemistry tabulation method based on the general flamelet formulation is proposed. This new tabulation method is applied to the same flame and demonstrates significant improvement compared to previous techniques.

For turbulent non-premixed flames, a new model to account for chemistry-turbulence interactions is proposed. It is found that these interactions are not important for radicals and small species, but substantial for aromatic species. The validity of various existing flamelet-based chemistry tabulation methods is examined, and a new linear relaxation model is proposed for aromatic species. The proposed relaxation model is validated against full chemistry calculations. To further quantify the importance of aromatic chemistry-turbulence interactions, Large-Eddy Simulations (LES) have been performed on a turbulent sooting jet flame. The aforementioned relaxation model is used to provide closure for the chemical source terms of transported aromatic species. The effects of turbulent unsteadiness on soot are highlighted by comparing the LES results with a separate LES using fully-tabulated chemistry. It is shown that turbulent unsteady effects are of critical importance for the accurate prediction of not only the inception locations, but also the magnitude and fluctuations of soot.

Relevance: 80.00%

Abstract:

A recirculating charge-coupled device structure has been devised. Entrance and exit gates allow a signal to be admitted, recirculated a given number of times, and then examined. In this way a small device permits simulation of a very long shift register without passing the signal through input and output diffusions. An oscilloscope motion picture demonstrating degradation of an actual circulating signal has been made. The performance of the device in simulating degradation of a signal by a very long shift register is well fit by a simple model based on transfer inefficiency.
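
The transfer-inefficiency model admits a one-line sketch: if each transfer leaves behind a fraction ε of a packet's charge, the leading packet retains (1 - ε)^N of its signal after N transfers. The numbers below are illustrative, not measurements from the device.

```python
def remaining_signal_fraction(n_transfers, inefficiency):
    """Simple transfer-inefficiency model: each gate-to-gate transfer
    leaves behind a fraction eps of the packet, so after N transfers the
    leading packet retains (1 - eps)**N of its original charge."""
    return (1.0 - inefficiency) ** n_transfers

# A recirculating device simulating a long register: e.g. an assumed
# loss of 1e-4 per transfer over 10,000 transfers gives roughly
# exp(-1) of the signal surviving in the leading packet.
frac = remaining_signal_fraction(10_000, 1e-4)
```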

Electrical properties of the mercury selenide on n-type chemically-cleaned silicon Schottky barrier have been studied. Barrier heights measured were 0.96 volts for the photoresponse technique and 0.90 volts for the current-voltage technique. These are the highest barriers yet reported on n-type silicon.

Relevance: 80.00%

Abstract:

This work contains 4 topics dealing with the properties of the luminescence from Ge.

The temperature, pump-power and time dependences of the photoluminescence spectra of Li-, As-, Ga-, and Sb-doped Ge crystals were studied. For impurity concentrations less than about 10^15 cm^-3, emissions due to electron-hole droplets can clearly be identified. For impurity concentrations on the order of 10^16 cm^-3, the broad lines in the spectra, which have previously been attributed to the emission from the electron-hole-droplet, were found to possess pump-power and time dependent line shapes. These properties show that these broad lines cannot be due to emission of electron-hole-droplets alone. We interpret these lines to be due to a combination of emissions from (1) electron-hole-droplets, (2) broadened multiexciton complexes, (3) broadened bound-excitons, and (4) a plasma of electrons and holes. The properties of the electron-hole-droplet in As-doped Ge were shown to agree with theoretical predictions.

The time dependences of the luminescence intensities of the electron-hole-droplet in pure and doped Ge were investigated at 2 and 4.2°K. The decay of the electron-hole-droplet in pure Ge at 4.2°K was found to be pump-power dependent and too slow to be explained by the widely accepted model due to Pokrovskii and Hensel et al. Detailed studies of the decay of the electron-hole-droplets in doped Ge were carried out for the first time, and we find no evidence of evaporation of excitons by electron-hole-droplets at 4.2°K. This doped Ge result is unexplained by the model of Pokrovskii and Hensel et al. It is shown that a model based on a cloud of electron-hole-droplets generated in the crystal and incorporating (1) exciton flow among electron-hole-droplets in the cloud and (2) exciton diffusion away from the cloud is capable of explaining the observed results.

It is shown that lithium impurities, introduced during device fabrication, can lead to the previously reported differences between the spectra of laser-excited high-purity Ge and electrically excited Ge double injection devices. By properly choosing the device geometry so as to minimize this Li contamination, it is shown that the Li concentration in double injection devices may be reduced to less than about 10^15 cm^-3, and electrically excited luminescence spectra similar to the photoluminescence spectra of pure Ge may be produced. This proves conclusively that electron-hole-droplets may be created in double injection devices by electrical excitation.

The ratio of the LA- to TO-phonon-assisted luminescence intensities of the electron-hole-droplet is demonstrated to be equal to the high temperature limit of the same ratio of the exciton for Ge. This result gives one confidence to determine similar ratios for the electron-hole-droplet from the corresponding exciton ratio in semiconductors in which the ratio for the electron-hole-droplet cannot be determined (e.g., Si and GaP). Knowing the value of this ratio for the electron-hole-droplet, one can obtain accurate values of many parameters of the electron-hole-droplet in these semiconductors spectroscopically.

Relevance: 80.00%

Abstract:

The magnetic moments of amorphous ternary alloys containing Pd, Co and Si in atomic concentrations corresponding to Pd_(80-x)Co_xSi_(20) in which x is 3, 5, 7, 9, 10 and 11, have been measured between 1.8 and 300°K and in magnetic fields up to 8.35 kOe. The alloys were obtained by rapid quenching of a liquid droplet and their structures were analyzed by X-ray diffraction. The measurements were made in a null-coil pendulum magnetometer in which the temperature could be varied continuously without immersing the sample in a cryogenic liquid. The alloys containing 9 at.% Co or less obeyed Curie's Law over certain temperature ranges, and had negligible permanent moments at room temperature. Those containing 10 and 11 at.% Co followed Curie's Law only above approximately 200°K and had significant permanent moments at room temperature. For all alloys, the moments calculated from Curie's Law were too high to be accounted for by the moments of individual Co atoms. To explain these findings, a model based on the existence of superparamagnetic clustering is proposed. The cluster sizes calculated from the model are consistent with the rapid onset of ferromagnetism in the alloys containing 10 and 11 at.% Co and with the magnetic moments in an alloy containing 7 at.% Co heat treated in such a manner as to contain a small amount of a crystalline phase. In alloys containing 7 at.% Co or less, a maximum in the magnetization vs temperature curve was observed around 10°K. This maximum was eliminated by cooling the alloy in a magnetic field, and an explanation for this observation is suggested.
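
The clustering inference can be sketched as follows. Under Curie's Law the susceptibility scales with the square of the effective moment; if n atomic moments lock into a rigidly aligned cluster, the Curie constant per atom is enhanced by the factor n, so the cluster size follows from the ratio of the apparent to the atomic moment. The moment values below are invented for illustration and are not the thesis' fitted numbers.

```python
mu_atom = 1.7      # assumed moment of an isolated Co atom (Bohr magnetons)
mu_apparent = 5.1  # assumed per-Co effective moment from a Curie-law fit

# For rigid clusters of n aligned atoms, the apparent per-atom moment
# squared is n * mu_atom**2, so the moment ratio squared gives n.
cluster_size = (mu_apparent / mu_atom) ** 2
```

This is why the per-atom moments "too high to be accounted for by the moments of individual Co atoms" point directly to superparamagnetic clustering.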

Relevance: 80.00%

Abstract:

The initial probabilities of activated, dissociative chemisorption of methane and ethane on Pt(110)-(1 x 2) have been measured. The surface temperature was varied from 450 to 900 K with the reactant gas temperature held constant at 300 K. Under these conditions, we probe the kinetics of dissociation via a trapping-mediated (as opposed to 'direct') mechanism. It was found that the probabilities of dissociation of both methane and ethane were strong functions of the surface temperature, with apparent activation energies of 14.4 kcal/mol for methane and 2.8 kcal/mol for ethane, which implies that the methane and ethane molecules have fully accommodated to the surface temperature. Kinetic isotope effects were observed for both reactions, indicating that C-H bond cleavage is involved in the rate-limiting step. A mechanistic model based on the trapping-mediated mechanism is used to explain the observed kinetic behavior. The activation energies for C-H bond dissociation of the thermally accommodated methane and ethane on the surface, extracted from the model, are 18.4 and 10.3 kcal/mol, respectively.
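
The apparent activation energies quoted above come from Arrhenius analysis of the dissociation probability versus surface temperature. A two-point version is sketched below on synthetic data generated with the methane value, E_a = 14.4 kcal/mol; the prefactor is arbitrary and the fit simply recovers the slope of ln(P) against 1/T.

```python
import math

R = 1.987e-3  # gas constant in kcal/(mol K)

def apparent_activation_energy(temps, probs):
    """Arrhenius slope: E_a = -R * d ln(P) / d(1/T), from a two-point fit."""
    x = [1.0 / t for t in temps]
    y = [math.log(p) for p in probs]
    slope = (y[1] - y[0]) / (x[1] - x[0])
    return -R * slope

# Synthetic dissociation probabilities at the endpoints of the studied
# surface-temperature range, generated with an assumed prefactor of 1e-2.
ea_true = 14.4
temps = (450.0, 900.0)
probs = tuple(1e-2 * math.exp(-ea_true / (R * t)) for t in temps)
ea_fit = apparent_activation_energy(temps, probs)
```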

The studies of the catalytic decomposition of formic acid on the Ru(001) surface with thermal desorption mass spectrometry following the adsorption of DCOOH and HCOOH on the surface at 130 and 310 K are described. Formic acid (DCOOH) chemisorbs dissociatively on the surface via both the cleavage of its O-H bond to form a formate and a hydrogen adatom, and the cleavage of its C-O bond to form carbon monoxide, a deuterium adatom and a hydroxyl (OH). The former is the predominant reaction. The rate of desorption of carbon dioxide is a direct measure of the kinetics of decomposition of the surface formate. It is characterized by a kinetic isotope effect, an increasingly narrow FWHM, and an upward shift in peak temperature with θ_T, the coverage of the dissociatively adsorbed formic acid. The FWHM and the peak temperature change from 18 K and 326 K at θ_T = 0.04 to 8 K and 395 K at θ_T = 0.89. The increase in the apparent activation energy of the C-D bond cleavage is largely a result of self-poisoning by the formate, the presence of which on the surface alters the electronic properties of the surface such that the activation energy of the decomposition of formate is increased. The variation of the activation energy for carbon dioxide formation with θ_T accounts for the observed sharp carbon dioxide peak. The coverage of surface formate can be adjusted over a relatively wide range so that the activation energy for C-D bond cleavage in the case of DCOOH can be adjusted to be below, approximately equal to, or well above the activation energy for the recombinative desorption of the deuterium adatoms. Accordingly, the desorption of deuterium was observed to be governed completely by the desorption kinetics of the deuterium adatoms at low θ_T, jointly by the kinetics of deuterium desorption and C-D bond cleavage at intermediate θ_T, and solely by the kinetics of C-D bond cleavage at high θ_T.
The overall branching ratio of the formate to carbon dioxide and carbon monoxide is approximately unity, regardless of the initial coverage θ_T, even though the activation energy for the production of carbon dioxide varies with θ_T. The desorption of water, which implies C-O bond cleavage of the formate, appears at approximately the same temperature as that of carbon dioxide. These observations suggest that the cleavage of the C-D bond and that of the C-O bond of two surface formates are coupled, possibly via the formation of a short-lived surface complex that is the precursor to the decomposition.

The measurement of steady-state rates is demonstrated here to be valuable in determining the kinetics associated with a short-lived, molecularly adsorbed precursor to further surface reactions, by determining the kinetic parameters for the dissociation of the molecular precursor of formaldehyde on the Pt(110)-(1 x 2) surface.

Overlayers of nitrogen adatoms on Ru(001) have been characterized both by thermal desorption mass spectrometry and low-energy electron diffraction, as well as chemically via the postadsorption and desorption of ammonia and carbon monoxide.

The nitrogen-adatom overlayer was prepared by decomposing ammonia thermally on the surface at a pressure of 2.8 x 10^(-6) Torr and a temperature of 480 K. The saturated overlayer prepared under these conditions has associated with it a (√247/10 x √247/10)R22.7° LEED pattern, has two peaks in its thermal desorption spectrum, and has a fractional surface coverage of 0.40. Annealing the overlayer to approximately 535 K results in a rather sharp (√3 x √3)R30° LEED pattern with an associated fractional surface coverage of one-third. Annealing the overlayer further to 620 K results in the disappearance of the low-temperature thermal desorption peak and the appearance of a rather fuzzy p(2x2) LEED pattern with an associated fractional surface coverage of approximately one-fourth. In the low coverage limit, the presence of the (√3 x √3)R30° N overlayer alters the surface in such a way that the binding energy of ammonia is increased by 20% relative to the clean surface, whereas that of carbon monoxide is reduced by 15%.

A general methodology for the indirect relative determination of absolute fractional surface coverages has been developed and was utilized to determine the saturation fractional coverage of hydrogen on Ru(001). Formaldehyde was employed as a bridge from the known reference point of the saturation fractional coverage of carbon monoxide to the unknown saturation fractional coverage of hydrogen on Ru(001). We find that θ_H^sat = 1.02 (±0.05), i.e., the surface stoichiometry is Ru : H = 1 : 1. The relative nature of the method, which cancels systematic errors, together with the utilization of a glass envelope around the mass spectrometer, which reduces spurious contributions in the thermal desorption spectra, results in high accuracy in the determination of absolute fractional coverages.

Relevance: 80.00%

Abstract:

These studies explore how, where, and when representations of variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type. This confirms that value is represented abstractly, a key tenet of value-based decision-making that had remained an open question. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the “goal-directed” system, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
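
The belief-thresholding strategy can be sketched as incremental Bayesian updating over a discrete hypothesis space followed by a pruning step. The hypotheses, observation likelihoods, and threshold value below are illustrative, not the fitted model from Chapter 5.

```python
def bayes_update(belief, likelihood):
    """One incremental Bayesian step over a discrete hypothesis space."""
    posterior = {h: belief[h] * likelihood[h] for h in belief}
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

def threshold_beliefs(belief, eps=0.05):
    """Belief thresholding: hypotheses whose posterior falls below eps
    are eliminated from the internal model and never updated again;
    the surviving beliefs are renormalized."""
    kept = {h: p for h, p in belief.items() if p >= eps}
    z = sum(kept.values())
    return {h: p / z for h, p in kept.items()}

# Three hypotheses about a hidden state, initially equiprobable; the
# observation strongly favors A and nearly rules out C.
belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
likelihood = {"A": 0.7, "B": 0.26, "C": 0.04}
belief = threshold_beliefs(bayes_update(belief, likelihood))
```

After one step, hypothesis C drops below the threshold and is pruned, which is the computational signature of serial hypothesis testing described above: low-probability hypotheses stop consuming updates.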

Relevance: 80.00%

Abstract:

The early stage of laminar-turbulent transition in a hypervelocity boundary layer is studied using a combination of modal linear stability analysis, transient growth analysis, and direct numerical simulation. Modal stability analysis is used to clarify the behavior of first and second mode instabilities on flat plates and sharp cones for a wide range of high enthalpy flow conditions relevant to experiments in impulse facilities. Vibrational nonequilibrium is included in this analysis, its influence on the stability properties is investigated, and simple models for predicting when it is important are described.

Transient growth analysis is used to determine the optimal initial conditions that lead to the largest possible energy amplification within the flow. Such analysis is performed for both spatially and temporally evolving disturbances. The analysis again targets flows that have large stagnation enthalpy, such as those found in shock tunnels, expansion tubes, and atmospheric flight at high Mach numbers, and clarifies the effects of Mach number and wall temperature on the amplification achieved. Direct comparisons between modal and non-modal growth are made to determine the relative importance of these mechanisms under different flow regimes.

Conventional stability analysis employs the assumption that disturbances evolve with either a fixed frequency (spatial analysis) or a fixed wavenumber (temporal analysis). Direct numerical simulations are employed to relax these assumptions and investigate the downstream propagation of wave packets that are localized in space and time, and hence contain a distribution of frequencies and wavenumbers. Such wave packets are commonly observed in experiments and hence their amplification is highly relevant to boundary layer transition prediction. It is demonstrated that such localized wave packets experience much less growth than is predicted by spatial stability analysis, and therefore it is essential that the bandwidth of localized noise sources that excite the instability be taken into account in making transition estimates. A simple model based on linear stability theory is also developed which yields comparable results with an enormous reduction in computational expense. This enables the amplification of finite-width wave packets to be taken into account in transition prediction.
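
The bandwidth effect admits a toy illustration: if a packet's frequency content is Gaussian and the amplification curve N(f) is Gaussian, the packet's net growth is the spectrum-weighted average of exp(N(f)), which falls below the single-frequency prediction exp(N_max) for any finite bandwidth. All shapes and magnitudes below are illustrative, not the thesis model.

```python
import math

def packet_amplification(bandwidth, n_max=6.0, gain_width=1.0, n_pts=201):
    """Toy model: a wave packet with Gaussian frequency content riding a
    Gaussian amplification curve N(f).  The net amplification is the
    spectrum-weighted average of exp(N(f)); only the spectral peak sees
    the full N-factor, so finite bandwidth always reduces the growth."""
    fs = [-3 + 6 * i / (n_pts - 1) for i in range(n_pts)]
    weights = [math.exp(-0.5 * (f / bandwidth) ** 2) for f in fs]
    gains = [math.exp(n_max * math.exp(-0.5 * (f / gain_width) ** 2)) for f in fs]
    return sum(w * g for w, g in zip(weights, gains)) / sum(weights)

single_mode = math.exp(6.0)            # spatial-theory prediction at the peak
broadband = packet_amplification(2.0)  # localized packet: less net growth
```

This is the quantitative sense in which "localized wave packets experience much less growth than is predicted by spatial stability analysis," and why the noise-source bandwidth must enter transition estimates.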