996 results for Temporal Precision
Abstract:
The temporal structure of neuronal spike trains in the visual cortex can provide detailed information about the stimulus and about the neuronal implementation of visual processing. Spike trains recorded from the macaque motion area MT in previous studies (Newsome et al., 1989a; Britten et al., 1992; Zohary et al., 1994) are analyzed here in the context of the dynamic random-dot stimulus that was used to evoke them. When the stimulus is incoherent, the spike trains can be highly modulated and precisely locked in time to the stimulus. In contrast, the coherent motion stimulus creates little or no temporal modulation and allows us to study patterns in the spike train that may be intrinsic to the cortical circuitry in area MT. Long gaps are found in the spike trains evoked by preferred-direction motion, and they appear to mirror bursts in the response to the anti-preferred direction of motion. A novel cross-correlation technique is used to establish that the gaps are correlated between pairs of neurons. Temporal modulation is also found in psychophysical experiments using a modified stimulus. A model is developed that accounts for the temporal modulation in terms of the computational theory of biological image-motion processing. A frequency-domain analysis of the stimulus reveals a repeating structure in its power spectrum that may account for the psychophysical and electrophysiological observations.
Some neurons tend to fire bursts of action potentials while others avoid burst firing. Using numerical and analytical models of spike trains as Poisson processes with added refractory periods and bursting, we are able to account for peaks in the power spectrum near 40 Hz without assuming the existence of an underlying oscillatory signal. A preliminary examination of the local field potential reveals that a stimulus-locked oscillation appears briefly at the beginning of the trial.
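As a rough illustration of the modeling approach described above, the following sketch (with assumed rate, refractory period, and bin width rather than the thesis's actual parameters) simulates a Poisson spike train with an absolute refractory period and shows that the refractoriness alone produces a peak in the spike-train power spectrum near the mean firing rate, with no underlying oscillator:

```python
import numpy as np

rng = np.random.default_rng(0)

def refractory_poisson(rate_hz, t_max_s, refrac_s):
    """Poisson spike train with an absolute refractory period (a renewal process)."""
    spikes, t = [], 0.0
    while True:
        # Exponential waiting time plus a fixed dead time after each spike.
        t += rng.exponential(1.0 / rate_hz) + refrac_s
        if t >= t_max_s:
            return np.array(spikes)
        spikes.append(t)

dt, seg_s, t_max = 1e-3, 1.0, 200.0   # 1 ms bins, 1 s segments
# Effective firing rate is 1/(1/60 + 0.008), i.e. about 40 spikes/s.
spikes = refractory_poisson(rate_hz=60.0, t_max_s=t_max, refrac_s=8e-3)
counts, _ = np.histogram(spikes, bins=int(t_max / dt), range=(0, t_max))
segs = counts.reshape(-1, int(seg_s / dt)).astype(float)
segs -= segs.mean(axis=1, keepdims=True)
psd = (np.abs(np.fft.rfft(segs, axis=1)) ** 2).mean(axis=0)  # segment-averaged PSD
freqs = np.fft.rfftfreq(segs.shape[1], d=dt)
band = (freqs > 10) & (freqs < 100)
print("PSD peak near", freqs[band][np.argmax(psd[band])], "Hz")
```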
Abstract:
Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged, based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide broadband parametric gain that is not limited by the specific atomic or molecular transitions of conventional OFCs. Emerging applications of microcombs include low-noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high-capacity telecommunications.
In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterizing the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs (microcombs) with microwave repetition rates below 80 GHz are demonstrated on a chip for the first time. Low threshold power (as low as 1 mW) is demonstrated across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase locking of microcombs is demonstrated, in which injection locking of the subcomb offset frequencies is observed by pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. We further demonstrate shot-noise-limited white phase noise of a microcomb for the first time. Finally, stabilization of the microcomb repetition rate is realized by phase-locked-loop control.
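For concreteness, one standard way to characterize the FSR and dispersion of a microcavity (offered here as a generic sketch, not necessarily the thesis's calibration method) is to fit measured resonance frequencies against relative mode number μ using the expansion ν_μ = ν_0 + (D_1/2π)μ + (D_2/2π)μ²/2. The example below uses made-up values:

```python
import numpy as np

# Synthetic "measured" resonance frequencies (Hz) versus relative mode number mu,
# following nu_mu = nu_0 + (D1/2pi)*mu + (D2/2pi)*mu**2/2 (all values assumed).
mu = np.arange(-40, 41)
nu0, fsr, d2_over_2pi = 193.4e12, 22.0e9, 5e3
nu = nu0 + fsr * mu + d2_over_2pi * mu**2 / 2
nu = nu + np.random.default_rng(1).normal(0, 50e3, mu.size)  # measurement noise

# A quadratic fit recovers the FSR (linear term) and second-order dispersion.
c2, c1, c0 = np.polyfit(mu, nu, 2)
print(f"FSR = {c1 / 1e9:.3f} GHz, D2/2pi = {2 * c2 / 1e3:.2f} kHz")
```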
For another major nonlinear-optical application of disk resonators, highly coherent stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with record-low Schawlow-Townes noise for any chip-based laser (less than 0.1 Hz^2/Hz) and technical noise comparable to commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high-performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. It is also robust enough to serve as the optical voltage-controlled oscillator in the first demonstration of a photonics-based microwave frequency synthesizer. Finally, applications of microresonators as frequency-reference cavities and low-phase-noise optomechanical oscillators are presented.
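For context on the quoted noise figure, the textbook Schawlow-Townes relation and the standard conversion from a white frequency-noise floor S_ν to a Lorentzian linewidth are sketched below (a hedged aide-mémoire with order-unity convention differences; the thesis's SBL-specific formula is not reproduced here):

```latex
\Delta\nu_{\mathrm{ST}} \;\approx\; \frac{\pi h \nu\,(\Delta\nu_c)^2}{P_{\mathrm{out}}},
\qquad
\Delta\nu_{\mathrm{FWHM}} \;=\; \pi\, S_\nu ,
```

so a white frequency-noise floor of S_ν = 0.1 Hz^2/Hz corresponds to a fundamental linewidth of roughly 0.3 Hz.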
Abstract:
The concept of seismogenic asperities and aseismic barriers has become a useful paradigm within which to understand the seismogenic behavior of major faults. Since asperities and barriers can be thought of as defining the potential rupture area of large megathrust earthquakes, it is important to identify their spatial extents, constrain their temporal longevity, and develop a physical understanding of their behavior. Space geodesy is making critical contributions to the identification of slip asperities and barriers, but progress in many geographical regions depends on improving the accuracy and precision of the basic measurements. This thesis begins with technical developments aimed at improving satellite radar interferometric measurements of ground deformation: we introduce an empirical algorithm to correct for interferometric path delays caused by spatially and temporally variable radar wave propagation speeds in the atmosphere. In chapter 2, I combine geodetic datasets with complementary spatio-temporal resolutions to improve our understanding of the spatial distribution of crustal deformation sources and their temporal evolution, using observations from Long Valley Caldera (California) as a test bed. In the third chapter I apply the tools developed in the first two chapters to analyze postseismic deformation associated with the 2010 Mw=8.8 Maule (Chile) earthquake. The result delimits patches where afterslip occurs, explores their relationship to coseismic rupture, quantifies frictional properties associated with inferred patches of afterslip, and discusses the relationship of asperities and barriers to long-term topography. The final chapter investigates interseismic deformation of the eastern Makran subduction zone using satellite radar interferometry alone, and demonstrates that with state-of-the-art techniques it is possible to quantify tectonic signals of small amplitude and long wavelength. Portions of the eastern Makran for which we estimate low fault coupling correspond to areas where bathymetric features on the downgoing plate are presently subducting, whereas the region of the 1945 M=8.1 earthquake appears to be more highly coupled.
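As an illustration of the empirical path-delay correction introduced in the opening chapter, a common first-order approach (a sketch under the assumption of a topography-correlated troposphere; the thesis's algorithm may be more elaborate) regresses unwrapped interferometric phase against elevation and removes the fitted trend:

```python
import numpy as np

def empirical_tropo_correction(phase, elevation):
    """Remove a topography-correlated tropospheric delay from unwrapped
    interferometric phase via a linear phase-versus-elevation regression."""
    mask = np.isfinite(phase) & np.isfinite(elevation)
    k, c = np.polyfit(elevation[mask], phase[mask], 1)  # phase ~ k*elev + c
    return phase - (k * elevation + c), k

# Usage sketch with synthetic data (all values hypothetical): a deformation
# signal plus a delay proportional to elevation.
rng = np.random.default_rng(2)
elev = rng.uniform(0.0, 3000.0, 10_000)                  # meters
delay = 2e-4 * elev                                      # topo-correlated delay (rad)
signal = 0.5 * np.exp(-(((elev - 500.0) / 300.0) ** 2))  # hypothetical deformation
phase = signal + delay + rng.normal(0.0, 0.05, elev.size)
corrected, k = empirical_tropo_correction(phase, elev)
# Caveat: in practice the regression must avoid actively deforming areas (or be
# applied band-by-band) so elevation-correlated tectonic signal is not removed.
```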
Abstract:
We report a new pulse-cleaning technique to enhance the contrast ratio of intense ultrashort laser pulses. A temporal pulse cleaner based on nonlinear ellipse rotation in a BK7 glass plate is developed, and a contrast-ratio improvement of two orders of magnitude is demonstrated for millijoule-level femtosecond input pulses; the total transmission efficiency of the pulse cleaner is 16.7%.
Abstract:
This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.
The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.
The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.
The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
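As a toy illustration of the mixed-integer encoding (a sketch for a single "eventually reach the goal interval" specification on a one-dimensional single integrator, using the open-source PuLP modeler; the horizon, bounds, and dynamics are assumptions, not the thesis's benchmarks):

```python
from pulp import (LpBinary, LpMinimize, LpProblem, LpVariable,
                  PULP_CBC_CMD, lpSum)

T, M = 20, 100.0                     # horizon and big-M constant
goal_lo, goal_hi = 8.0, 10.0         # "eventually" target interval
prob = LpProblem("ltl_milp_sketch", LpMinimize)

x = [LpVariable(f"x_{t}", -20, 20) for t in range(T + 1)]  # state ("always" bounds)
u = [LpVariable(f"u_{t}", -1, 1) for t in range(T)]        # control input
a = [LpVariable(f"a_{t}", 0) for t in range(T)]            # |u_t| slack
z = [LpVariable(f"z_{t}", cat=LpBinary) for t in range(T + 1)]

prob += lpSum(a)                     # objective: minimize control effort
prob += x[0] == 0.0                  # initial condition
for t in range(T):
    prob += x[t + 1] == x[t] + u[t]  # single-integrator dynamics
    prob += a[t] >= u[t]
    prob += a[t] >= -u[t]
for t in range(T + 1):
    # Big-M encoding: z_t = 1 forces x_t into the goal interval.
    prob += x[t] >= goal_lo - M * (1 - z[t])
    prob += x[t] <= goal_hi + M * (1 - z[t])
prob += lpSum(z) >= 1                # the "eventually" requirement
prob.solve(PULP_CBC_CMD(msg=False))
print([round(xi.value(), 2) for xi in x])
```

Roughly speaking, full LTL requires one such gadget per temporal operator, with binary variables tied together across the formula's parse tree.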
Abstract:
Early embryogenesis in metazoa is controlled by maternally synthesized products. Among these products, the mature egg is loaded with transcripts representing approximately two thirds of the genome. A subset of this maternal RNA pool is degraded prior to the transition to zygotic control of development. This transfer of control of development from maternal to zygotic products is referred to as the midblastula transition (or MBT). It is believed that the degradation of maternal transcripts is required to terminate maternal control of development and to allow zygotic control of development to begin. Until now this process of maternal transcript degradation and the subsequent timing of the MBT have been poorly understood. I have demonstrated that in the early embryo there are two independent RNA degradation pathways, either of which is sufficient for transcript elimination. However, only the concerted action of both pathways leads to elimination of transcripts with the correct timing, at the MBT. The first pathway is maternally encoded, is triggered by egg activation, and is targeted to specific classes of mRNAs through cis-acting elements in the 3' untranslated region (UTR). The second pathway is activated 2 hr after fertilization and functions together with the maternal pathway to ensure that transcripts are degraded by the MBT. In addition, some transcripts fail to degrade at select subcellular locations, adding an element of spatial control to RNA degradation. The spatial control of RNA degradation is achieved by protecting, or masking, transcripts from the degradation machinery. The RNA degradation and protection events are regulated by distinct cis-elements in the 3' UTR. These results provide the first systematic dissection of this highly conserved process in development and demonstrate that RNA degradation is a novel mechanism used for both temporal and spatial control of development.
Abstract:
Stimulated emission depletion (STED) imaging exploits the nonlinear relationship between fluorescence saturation and stimulated depletion of the excited state. It enables three-dimensional (3D) imaging and breaks the diffraction barrier of far-field light microscopy by confining fluorescence emission to a sub-diffraction spot. To improve the resolution attainable with this technology, we simulate the temporal behavior of the population probabilities of the sample and derive optimized parameters for the STED pulse, such as its intensity, duration, and delay time.
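A minimal sketch of the kind of simulation described, assuming a simple two-level rate-equation model and made-up photophysical parameters (the paper's model may include additional levels):

```python
import numpy as np

def gaussian(t, t0, fwhm):
    s = fwhm / 2.3548  # FWHM -> standard deviation
    return np.exp(-0.5 * ((t - t0) / s) ** 2)

t = np.arange(0.0, 10.0, 1e-3)          # time (ns)
tau = 3.0                               # fluorescence lifetime (ns)
k_exc = 5.0 * gaussian(t, 0.2, 0.1)     # excitation rate (1/ns)
k_sted = 50.0 * gaussian(t, 0.5, 0.3)   # depletion rate; scan delay/duration/intensity

n1 = np.zeros_like(t)                   # excited-state population probability
dt = t[1] - t[0]
for i in range(len(t) - 1):             # forward-Euler integration
    dn = k_exc[i] * (1.0 - n1[i]) - n1[i] / tau - k_sted[i] * n1[i]
    n1[i + 1] = n1[i] + dt * dn

# Fluorescence collected after the STED pulse scales with the surviving
# excited-state population; scanning the STED pulse parameters maps out the
# optimization the abstract describes.
print(n1.max(), n1[-1])
```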
Abstract:
A Fourier-analysis method is used to accurately determine not only the absolute phase but also the temporal-pulse phase of an isolated few-cycle (chirped) laser pulse. The method is independent of the pulse shape and can fully characterize the light wave even when only a few samples per optical cycle are available. It paves the way for investigating absolute-phase-dependent extreme nonlinear optics and the evolution of the absolute phase and the temporal-pulse phase of few-cycle laser pulses.
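The paper's exact Fourier procedure is not reproduced here, but the sketch below illustrates the underlying idea with an analytic-signal (Hilbert-transform) variant: from a synthetic chirped few-cycle field sampled a few times per optical cycle, the temporal phase is extracted and the absolute phase recovered at the envelope peak. All pulse parameters are assumptions for illustration:

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic few-cycle pulse: E(t) = env(t) * cos(w0*t + phi0 + b*t**2)
fs = 20.0                              # samples per femtosecond
t = np.arange(-20.0, 20.0, 1.0 / fs)   # time (fs)
w0, phi0, b = 2.35, 0.7, 0.01          # carrier (rad/fs), absolute phase, chirp
env = np.exp(-((t / 5.0) ** 2))
field = env * np.cos(w0 * t + phi0 + b * t**2)

# The analytic signal gives the temporal phase; subtracting the carrier term at
# the envelope peak leaves the absolute (carrier-envelope) phase.
temporal_phase = np.unwrap(np.angle(hilbert(field)))
peak = np.argmax(env)
print((temporal_phase[peak] - w0 * t[peak]) % (2 * np.pi))  # ~0.7
```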
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices; theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests, which imposes computational and economic constraints on classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the most informative test at each stage and, based on the response, updates its posterior beliefs over the theories; these beliefs inform the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
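As a toy, noiseless illustration of the EC2 objective (BROAD itself handles noisy responses and uses an accelerated lazy-greedy implementation; all sizes and prediction tables below are made up):

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# H hypotheses grouped into theory classes; each hypothesis deterministically
# predicts an outcome in {0, 1} for each of T candidate tests.
H, T = 8, 12
theory_class = np.array([0, 0, 0, 1, 1, 2, 2, 2])
predict = rng.integers(0, 2, size=(H, T))
posterior = np.full(H, 1.0 / H)

def ec2_greedy_test(posterior, predict, theory_class, remaining):
    """Pick the test that cuts the most edge weight (EC2). Edges connect
    hypotheses in different theory classes, weighted by the product of their
    posterior masses; an edge is cut once its endpoints are distinguished."""
    best_test, best_cut = None, -1.0
    for test in remaining:
        cut = sum(posterior[i] * posterior[j]
                  for i, j in itertools.combinations(range(H), 2)
                  if theory_class[i] != theory_class[j]
                  and predict[i, test] != predict[j, test])
        if cut > best_cut:
            best_test, best_cut = test, cut
    return best_test

def update(posterior, predict, test, outcome):
    """Eliminate hypotheses inconsistent with the observed outcome (noiseless)."""
    post = posterior * (predict[:, test] == outcome)
    return post / post.sum()

# One adaptive step: choose a test, observe a simulated outcome, update beliefs.
test = ec2_greedy_test(posterior, predict, theory_class, range(T))
posterior = update(posterior, predict, test, outcome=predict[0, test])  # truth: hyp 0
```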
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
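One standard construction that makes this concrete (an illustration, not necessarily the thesis's exact model): exponential discounting applied to a logarithmically compressed subjective clock yields a generalized-hyperbolic discount function,

```latex
\tau(t) = \frac{1}{k}\ln(1 + k t)
\quad\Longrightarrow\quad
D(t) = e^{-\rho\,\tau(t)} = (1 + k t)^{-\rho/k},
```

which is non-exponential in calendar time t and therefore produces the preference reversals characteristic of temporal choice inconsistency.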
We also test the predictions of behavioural theories in the "wild", paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from the predictions of the standard rational model. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone explains. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
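A hedged sketch of the kind of loss-averse utility that can enter such a discrete choice model (the paper's exact specification is not reproduced; r_i is an assumed reference price, λ > 1 the loss-aversion coefficient, and (x)_+ = max(x, 0)):

```latex
u_i = -\beta\, p_i + \eta\left[(r_i - p_i)_+ - \lambda\,(p_i - r_i)_+\right],
\qquad
P(i) = \frac{e^{u_i}}{\sum_j e^{u_j}}.
```

Under this specification, a price below the reference (a discount) enters as a gain and boosts demand beyond the pure price-elasticity effect; once the discount ends, the same price is coded as a loss, pushing demand toward close substitutes.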
In future work, BROAD can be applied widely to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
We demonstrate that a synthesized laser field consisting of an intense long (45 fs, multi-optical-cycle) laser pulse and a weak short (7 fs, few-optical-cycle) laser pulse can control the electron dynamics and high-order harmonic generation in argon, and can generate an extreme-ultraviolet supercontinuum suitable for the production of a single strong attosecond pulse. The long pulse provides a large-amplitude field, while the short pulse creates a temporally narrow enhancement of the laser field and a gate for the highest-energy harmonic emission. This scheme paves the way to generating intense isolated attosecond pulses with strong multi-optical-cycle laser pulses.
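The gating argument rests on the standard semiclassical cutoff law (a textbook relation, not a result of this paper): the brief field enhancement from the weak short pulse raises the ponderomotive energy U_p only within a narrow temporal window, so the highest-energy harmonics are emitted only there,

```latex
\hbar\omega_{\mathrm{cutoff}} = I_p + 3.17\,U_p,
\qquad
U_p = \frac{e^2 E_0^2}{4 m_e \omega_0^2},
```

where I_p is the ionization potential and E_0 and ω_0 are the field amplitude and carrier frequency.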
Abstract:
Nitrate and hydroxyl radicals are chemical species involved in atmospheric pollution. In the present work, their concentrations were estimated using measurements of the concentrations of several volatile organic compounds recorded in the Parque Natural de Valderejo (Araba). The calculation methodology had previously been applied to ·OH, and this is the first time it has been applied to NO3· concentrations in a rural area. The calculated hydroxyl radical concentrations (6.02·10^6 to 8.06·10^6 molec. cm^-3) agree with those obtained in previous measurements and studies. In the case of the nitrate radical, the estimated concentrations (2.13·10^11 to 2.02·10^12 molec. cm^-3) are considerably higher than those found in the literature, from which it is concluded that this technique is not valid for estimating NO3· in a rural background atmosphere such as Valderejo. The deviation is probably due to processes not accounted for in the assumptions of the calculation, so it is proposed that the study of this area be continued.
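The methodology is likely a variant of the standard VOC-ratio ("photochemical clock") technique, sketched here as a hedged illustration: for two VOCs A and B removed primarily by reaction with a radical X at rate constants k_A ≠ k_B, the change in their log concentration ratio yields the radical exposure,

```latex
\left.\ln\frac{[A]}{[B]}\right|_{t_2} - \left.\ln\frac{[A]}{[B]}\right|_{t_1}
= -\,(k_A - k_B)\int_{t_1}^{t_2} [X]\,\mathrm{d}t,
```

from which an average [X] follows after dividing by the elapsed time. An overestimate like the one reported for NO3· would then point to removal of the VOCs by processes other than reaction with the radical, consistent with the conclusion above.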