978 results for temporal sequence
Abstract:
A Fourier analysis method is used to accurately determine not only the absolute phase but also the temporal-pulse phase of an isolated few-cycle (chirped) laser pulse. This method is independent of the pulse shape and can fully characterize the light wave even when only a few samples per optical cycle are available. It paves the way for investigating absolute-phase-dependent extreme nonlinear optics and the evolution of the absolute phase and the temporal-pulse phase of few-cycle laser pulses.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests: Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
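The adaptive loop at the heart of such a design can be sketched in a few lines. The following is a toy illustration, not the thesis code: the theories, tests, and predicted binary choices are all hypothetical, responses are assumed noise-free, and the EC2 objective is reduced to its essentials (pairwise probability mass between hypotheses that still disagree on some unasked test).

```python
# Toy sketch of BROAD-style adaptive testing with an EC2-like criterion.
# Hypotheses (decision theories), tests (lottery pairs), and their predicted
# binary choices are all hypothetical; responses are assumed noise-free.

def ec2_weight(prior, predictions, asked):
    """Pairwise probability mass between hypotheses that still disagree on
    some unasked test (i.e. edges between equivalence classes not yet cut)."""
    hyps = list(prior)
    total = 0.0
    for i, h in enumerate(hyps):
        for g in hyps[i + 1:]:
            if any(predictions[h][t] != predictions[g][t]
                   for t in predictions[h] if t not in asked):
                total += prior[h] * prior[g]
    return total

def next_test(prior, predictions, asked):
    """Greedy rule: pick the unasked test minimizing expected residual weight."""
    tests = [t for t in next(iter(predictions.values())) if t not in asked]

    def expected_weight(t):
        exp_w = 0.0
        for outcome in (0, 1):
            mass = {h: p for h, p in prior.items()
                    if predictions[h][t] == outcome}
            z = sum(mass.values())
            if z > 0:
                post = {h: p / z for h, p in mass.items()}
                exp_w += z * ec2_weight(post, predictions, asked | {t})
        return exp_w

    return min(tests, key=expected_weight)

def identify(truth, prior, predictions):
    """Run the adaptive loop until one hypothesis remains; return it and the
    number of tests used."""
    asked, prior = set(), dict(prior)
    while len(prior) > 1:
        t = next_test(prior, predictions, asked)
        asked.add(t)
        outcome = predictions[truth][t]
        mass = {h: p for h, p in prior.items() if predictions[h][t] == outcome}
        z = sum(mass.values())
        prior = {h: p / z for h, p in mass.items()}
    return next(iter(prior)), len(asked)

# Four hypothetical theories and their predicted choices on three tests.
predictions = {
    "expected_value":  {"t1": 0, "t2": 0, "t3": 0},
    "prospect_theory": {"t1": 1, "t2": 0, "t3": 1},
    "crra":            {"t1": 1, "t2": 1, "t3": 0},
    "moments":         {"t1": 1, "t2": 1, "t3": 1},
}
prior = {h: 0.25 for h in predictions}
print(identify("prospect_theory", prior, predictions)[0])
```

Each round greedily picks the test that most reduces the expected residual edge weight; it is this greedy rule whose near-optimality follows from adaptive submodularity.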
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment, and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
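As a minimal illustration of that inconsistency, consider the standard hyperbolic discount function D(t) = 1/(1 + kt) against exponential discounting. The payoffs ($100 sooner versus $120 five periods later) and parameter values below are hypothetical, chosen only to exhibit the preference reversal:

```python
# Sketch of time-inconsistent choice under hyperbolic discounting.
# Payoffs and parameters are hypothetical.

def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)          # D(t) = 1 / (1 + kt)

def exponential(t, d=0.9):
    return d ** t                        # D(t) = d^t

def prefers_later(discount, delay):
    """True if $120 at time delay + 5 beats $100 at time delay."""
    return 120 * discount(delay + 5) > 100 * discount(delay)

# Exponential discounting is time-consistent: the choice never flips.
print(prefers_later(exponential, 0), prefers_later(exponential, 30))  # → False False

# Hyperbolic discounting reverses: the smaller-sooner payoff wins up close,
# the larger-later payoff wins when both are far in the future.
print(prefers_later(hyperbolic, 0), prefers_later(hyperbolic, 30))    # → False True
```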
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
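The flavour of such a loss-averse discrete choice model can be sketched as follows. This is an illustrative toy, not the estimated model: the piecewise-linear gain-loss value, the binary logit purchase probability, and all parameter values (including the 2.25 loss-aversion coefficient) are assumptions made for the sketch.

```python
import math

# Toy reference-dependent utility inside a binary logit purchase model.
# lam > 1 is the loss-aversion coefficient; all numbers are hypothetical.

def gain_loss(x, lam=2.25):
    """Piecewise-linear prospect-theory value of a price deviation x."""
    return x if x >= 0 else lam * x

def purchase_prob(price, ref_price, base_utility=1.0, lam=2.25):
    # Paying below the reference price registers as a gain, above as a loss.
    u = base_utility - price + gain_loss(ref_price - price, lam)
    return 1.0 / (1.0 + math.exp(-u))

p_ref = purchase_prob(1.0, ref_price=1.0)       # sold at the reference price
p_discount = purchase_prob(0.8, ref_price=1.0)  # 20% discount: a gain
p_markup = purchase_prob(1.2, ref_price=1.0)    # 20% markup: a loss

# Loss aversion makes demand respond asymmetrically around the reference:
# the drop after a markup exceeds the boost from an equal-sized discount.
print(p_ref - p_markup > p_discount - p_ref)    # → True
```

This asymmetry is the signature a discrete choice estimation would look for: demand responses to discounts and to their withdrawal that are larger than price elasticity alone can explain.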
In future work, BROAD can be applied widely for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
We demonstrated that a synthesized laser field consisting of an intense long (45 fs, multi-optical-cycle) laser pulse and a weak short (7 fs, few-optical-cycle) laser pulse can control the electron dynamics and high-order harmonic generation in argon, and generate an extreme ultraviolet supercontinuum for the production of a single strong attosecond pulse. The long pulse offers a large-amplitude field, and the short pulse creates a temporally narrow enhancement of the laser field and a gate for the highest-energy harmonic emission. This scheme paves the way to generating intense isolated attosecond pulses with strong multi-optical-cycle laser pulses.
Abstract:
The nitrate and hydroxyl radicals are chemical species involved in atmospheric pollution. In this work, their concentrations were estimated using measurements of the concentrations of several volatile organic compounds recorded in the Valderejo Natural Park (Araba). The calculation methodology had previously been applied to ·OH, and this is the first time it has been applied to NO3· concentrations in a rural area. The calculated hydroxyl radical concentrations (6.02·10⁶ – 8.06·10⁶ molec. cm⁻³) agree with those obtained in previous measurements and studies. In the case of the nitrate radical, the estimated concentrations (2.13·10¹¹ – 2.02·10¹² molec. cm⁻³) are considerably higher than those found in the literature, so it was concluded that this measurement technique is not valid for calculating NO3· in a rural background atmosphere such as Valderejo's. This deviation is probably due to other processes not accounted for in the calculation hypothesis, and it is therefore proposed to continue the study of this area.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems specified by known dynamics and a given cost functional. Under the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used, as the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
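The discrete analogue can be made concrete in a few lines. The sketch below follows the linearly solvable MDP formulation: with the exponential transformation z = exp(−V), the first-exit Bellman equation becomes linear in the desirability z, namely z = exp(−q) P z. The toy chain, state costs, and passive dynamics are hypothetical, not from the thesis.

```python
import math

# Toy linearly solvable MDP (a discrete analogue of the linear HJB): with
# z = exp(-V), the first-exit Bellman equation is linear, z = exp(-q) P z.
# The chain, state costs, and passive dynamics below are hypothetical.

n = 6                          # states 0..5; state 0 is the absorbing goal
q = [0.0] + [0.2] * (n - 1)    # per-step state costs; zero at the goal

def passive_step(i):
    """Uncontrolled dynamics: unbiased random walk, reflecting right wall."""
    return [(max(i - 1, 0), 0.5), (min(i + 1, n - 1), 0.5)]

z = [1.0] * n                  # desirability; pinned to 1 at the goal
for _ in range(2000):          # fixed-point iteration of the linear equation
    z = [1.0] + [math.exp(-q[i]) * sum(p * z[j] for j, p in passive_step(i))
                 for i in range(1, n)]

value = [-math.log(zi) for zi in z]   # recover the value function V = -log z
# Cost-to-go grows monotonically with distance from the goal.
print(all(value[i] < value[i + 1] for i in range(n - 1)))   # → True
```

Because the fixed-point equation is linear in z, it can equally be solved with one sparse linear solve instead of iteration, which is the property the computational methods below exploit.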
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
Understanding how transcriptional regulatory sequence maps to regulatory function remains a difficult problem in regulatory biology. Given a particular DNA sequence for a bacterial promoter region, we would like to be able to say which transcription factors bind there, how strongly they bind, and whether they interact with each other and/or RNA polymerase, with the ultimate objective of integrating knowledge of these parameters into a prediction of gene expression levels. Statistical thermodynamics provides a useful framework for doing so, enabling us to predict how gene expression levels depend on transcription factor binding energies and concentrations. We used thermodynamic models, coupled with models of the sequence-dependent binding energies of transcription factors and RNAP, to construct a genotype-to-phenotype map for the level of repression exhibited by the lac promoter, and tested it experimentally using a set of promoter variants from E. coli strains isolated from different natural environments. For this work, we sought to "reverse engineer" naturally occurring promoter sequences to understand how variations in promoter sequence affect gene expression. The natural inverse of this approach is to "forward engineer" promoter sequences to obtain targeted levels of gene expression. We used a high-precision model of RNAP-DNA sequence-dependent binding energy, coupled with a thermodynamic model relating binding energy to gene expression, to predictively design and verify a suite of synthetic E. coli promoters whose expression varied over nearly three orders of magnitude.
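A minimal sketch of the kind of thermodynamic calculation involved is the textbook simple-repression fold-change formula; the parameter values below are illustrative, not the fitted numbers from this work.

```python
import math

# Hedged sketch of a simple-repression thermodynamic model: expression
# relative to an unrepressed promoter (fold-change) as a function of
# repressor copy number R and binding energy. Values are illustrative.

N_NS = 4.6e6          # approximate number of non-specific genomic sites in E. coli

def fold_change(R, delta_eps):
    """Fold-change in expression under simple repression.
    R: repressor copy number; delta_eps: specific binding energy in units
    of kT (more negative = tighter binding)."""
    return 1.0 / (1.0 + (R / N_NS) * math.exp(-delta_eps))

# More repressors, or tighter binding, both lower expression.
print(fold_change(10, -15.0) > fold_change(100, -15.0) > fold_change(100, -17.0))  # → True
```

A sequence-dependent energy model enters by supplying delta_eps as a function of the promoter sequence, which is what turns this into a genotype-to-phenotype map.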
However, although thermodynamic models enable predictions of mean levels of gene expression, it has become evident that cell-to-cell variability or ``noise'' in gene expression can also play a biologically important role. In order to address this aspect of gene regulation, we developed models based on the chemical master equation framework and used them to explore the noise properties of a number of common E. coli regulatory motifs; these properties included the dependence of the noise on parameters such as transcription factor binding strength and copy number. We then performed experiments in which these parameters were systematically varied and measured the level of variability using mRNA FISH. The results showed a clear dependence of the noise on these parameters, in accord with model predictions.
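For intuition, the closed-form results for the standard two-state ("telegraph") promoter model, which the chemical master equation yields, already show how noise depends on the regulatory parameters; the rate values below are hypothetical, and this is the generic model rather than the specific motifs studied in the thesis.

```python
# Closed-form mRNA statistics for the standard two-state (telegraph) promoter:
# the gene switches ON at rate k_on and OFF at rate k_off, transcribes at rate
# k_tx while ON, and transcripts degrade at rate gamma. Rates are hypothetical.

def mrna_mean(k_on, k_off, k_tx, gamma):
    return (k_tx / gamma) * k_on / (k_on + k_off)

def fano(k_on, k_off, k_tx, gamma):
    """Fano factor (variance/mean); 1 is the Poisson (constitutive) limit."""
    ks = k_on + k_off
    return 1.0 + (k_tx * k_off) / (ks * (gamma + ks))

# A promoter that never switches off is Poissonian (Fano = 1); slow
# ON/OFF switching pushes the noise well above the Poisson limit.
print(fano(1.0, 0.0, 10.0, 1.0), fano(0.1, 0.1, 10.0, 1.0))
```

Varying k_off here plays the role of varying repressor binding strength in the experiments: stronger binding lengthens OFF periods and raises the predicted noise at a given mean.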
Finally, one shortcoming of the preceding modeling frameworks is that their applicability is largely limited to systems that are already well-characterized, such as the lac promoter. Motivated by this fact, we used a high throughput promoter mutagenesis assay called Sort-Seq to explore the completely uncharacterized transcriptional regulatory DNA of the E. coli mechanosensitive channel of large conductance (MscL). We identified several candidate transcription factor binding sites, and work is continuing to identify the associated proteins.
Abstract:
The thesis analyzes the close relationship between pragmatism, or consequentialism, and the temporal modulation of the effects of judicial decisions. Within this relationship, it is worth highlighting the point of intersection that stands out on several occasions: the economic argument. This type of argument can take on special importance when examining the opportunity and convenience of eminently political decision-making. In the judicial sphere, however, the pragmatic or consequentialist argument of an economic nature should not prevail as the basis for judicial decisions, especially in tax matters. The problems at the center of this study can be posed through the following questions: may the Supremo Tribunal Federal, when judging a given tax matter, take into account an argument such as the potential shortfall of X billion reais that a decision against the tax authorities could impose on the public coffers? Is the reasoning of a judicial decision based exclusively or predominantly on such an argument legitimate or illegitimate? What weight may it carry in judicial decision-making? When it is applied, are there parameters to be followed? Which ones? We show that the prevalence of such an argument is inappropriate in the judicial sphere; that is, it should carry reduced or peripheral weight, serving to corroborate or reinforce the legal arguments at the center of the debate submitted to the Judiciary in general, and to the Supremo Tribunal Federal in particular. Seeking to clarify the main limits and possibilities of this argument, especially in relation to the temporal modulation of the effects of judicial decisions, we set out rules necessary for its proper use, on pain of an inconceivable subversion of various principles and fundamental rights guaranteed by the Constitution.
In examining tax matters submitted to the Supreme Court, the governing parameter is the greatest possible effectiveness and concreteness of the constitutional text. Temporal modulation of effects applies to a decision that, in declaring a normative act unconstitutional, would stray even further from the constitutional will if the traditional ex tunc effect (retroactive to the birth of the law) were applied. In these specific and exceptional situations, modulation is justified in order to give greater concreteness and efficacy to the Constitution. The thesis proposed, in the end, consists of the set of rules developed in the work together with a legislative proposal.
Abstract:
The δD values of nitrated cellulose from a variety of trees covering a wide geographic range have been measured. These measurements have been used to ascertain which factors are likely to cause δD variations in cellulose C-H hydrogen.
It is found that a primary source of tree δD variation is the δD variation of the environmental precipitation. Superimposed on this are isotopic variations caused by the transpiration of the leaf water incorporated by the tree. The magnitude of this transpiration effect appears to be related to relative humidity.
Within a single tree, it is found that the hydrogen isotope variations which occur for a ring sequence in one radial direction may not be exactly the same as those which occur in a different direction. Such heterogeneities appear most likely to occur in trees with asymmetric ring patterns that contain reaction wood. In the absence of reaction wood such heterogeneities do not seem to occur. Thus, hydrogen isotope analyses of tree ring sequences should be performed on trees which do not contain reaction wood.
Comparisons of tree δD variations with variations in local climate are performed on two levels: spatial and temporal. It is found that the δD values of 20 North American trees from a wide geographic range are reasonably well-correlated with the corresponding average annual temperature. The correlation is similar to that observed for a comparison of the δD values of annual precipitation of 11 North American sites with annual temperature. However, it appears that this correlation is significantly disrupted by trees which grew on poorly drained sites such as those in stagnant marshes. Therefore, site selection may be important in choosing trees for climatic interpretation of δD values, although proper sites do not seem to be uncommon.
The measurement of δD values in 5-year samples from the tree ring sequences of 13 trees from 11 North American sites reveals a variety of relationships with local climate. As with the spatial δD vs. climate comparison, site selection is also apparently important for temporal tree δD vs. climate comparisons. Again, it seems that poorly-drained sites are to be avoided. For nine trees from different "well-behaved" sites, it was found that the local climatic variable best related to the δD variations was not the same for all sites.
Two of these trees showed a strong negative correlation with the amount of local summer precipitation. Consideration of factors likely to influence the isotopic composition of summer rain suggests that rainfall intensity may be important: the higher the intensity, the lower the δD value. Such an effect might explain the negative correlation of δD vs. summer precipitation amount for these two trees. A third tree also exhibited a strong correlation with summer climate, but in this instance it was a positive correlation of δD with summer temperature.
The remaining six trees exhibited the best correlation between δD values and local annual climate. However, in none of these six cases was annual temperature the most important variable. In fact, annual temperature commonly showed no relationship at all with tree δD values. Instead, it was found that a simple mass balance model incorporating two basic assumptions yielded parameters which produced the best relationships with tree δD values. First, it was assumed that the δD values of these six trees reflected the δD values of annual precipitation incorporated by these trees. Second, it was assumed that the δD value of the annual precipitation was a weighted average of two seasonal isotopic components: summer and winter. Mass balance equations derived from these assumptions yielded combinations of variables that commonly showed a relationship with tree δD values where none had previously been discerned.
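The two-component mass balance described above amounts to a precipitation-weighted average of the seasonal end-members; a minimal sketch with hypothetical numbers:

```python
# Minimal sketch of the two-component seasonal mass balance: annual
# precipitation deltaD as a precipitation-amount-weighted average of a summer
# and a winter isotopic component. All numbers are hypothetical.

def annual_dD(p_summer, dD_summer, p_winter, dD_winter):
    """Precipitation-weighted mean deltaD (per mil) of annual precipitation."""
    return (p_summer * dD_summer + p_winter * dD_winter) / (p_summer + p_winter)

# Isotopically heavier summer rain (-30 per mil) vs lighter winter snow (-90).
summer_heavy = annual_dD(600, -30.0, 300, -90.0)   # summer-dominated year
winter_heavy = annual_dD(300, -30.0, 600, -90.0)   # winter-dominated year

# A shift toward winter precipitation lowers the annual (and hence tree)
# deltaD even if neither seasonal end-member changes.
print(summer_heavy, winter_heavy)   # → -50.0 -70.0
```

This is why combinations of seasonal variables can correlate with tree δD where annual temperature alone shows no relationship.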
It was found for these "well-behaved" trees that not all sample intervals in a δD vs. local climate plot fell along a well-defined trend. These departures from the local δD vs. climate norm were defined as "anomalous". Some of these anomalous intervals were common to trees from different locales. When such widespread commonality of an anomalous interval occurred, it was observed that the interval corresponded to a period in which drought had existed in the North American Great Plains.
Consequently, there appears to be a combination of both local and large-scale climatic information in the δD variations of tree cellulose C-H hydrogen.
Abstract:
It is often difficult to define 'water quality' with any degree of precision. One approach, suggested by Battarbee (1997), is based on the extent to which individual lakes have changed compared with their natural 'baseline' status. Defining the baseline status of artificial lakes and reservoirs, however, is very difficult. In ecological terms, the definition of quality must include some consideration of their functional characteristics and the extent to which these characteristics are self-sustaining. The challenge of managing lakes in a sustainable way is particularly acute in semi-arid, Mediterranean countries, where the quality of the water is strongly influenced by the unpredictability of the rainfall as well as by year-to-year variations in the seasonal averages. Wise management requires profound knowledge of how these systems function; thus a holistic approach must be adopted and the factors influencing the seasonal dynamics of the lakes quantified over a range of spatial and temporal scales. In this article, the authors describe some of the ways in which both long-term and short-term changes in the weather have influenced the seasonal and spatial dynamics of phytoplankton in El Gergal, a water supply reservoir situated in the south of Spain. The quality of the water stored in this reservoir is typically very good, but surface blooms of algae commonly appear during warm, calm periods when the water level is low. El Gergal is managed by the Empresa Municipal de Abastecimiento y Saneamiento (EMASESA) and supplies water for domestic, commercial and industrial use to an area which includes the city of Seville and twelve of its surrounding towns (ca. 1.3 million inhabitants). El Gergal is the last in a chain of four reservoirs situated in the Rivera de Huelva basin, a tributary of the Guadalquivir river.
It was commissioned by EMASESA in 1979, and since then the company has monitored its main limnological parameters on at least a monthly basis and used this information to improve the management of the reservoir. As a consequence of these intensive studies, the physical, chemical and biological information acquired during this period makes the El Gergal database one of the most complete in Spain. In this article the authors focus on three weather-related effects that have had a significant impact on the composition and distribution of phytoplankton in El Gergal: (i) the changes associated with severe droughts; (ii) the spatial variations produced by short-term changes in the weather; (iii) the impact of water transfers on the seasonal dynamics of the dinoflagellate Ceratium.
Abstract:
Until now, observations on the temporal variation of size in freshwater copepods have not provided much information. Other observers mention only in passing phenomena from which temporal variations can be deduced. In this study Cyclops strenuus s.l., a freshwater species of fairly wide distribution, is studied in two water bodies. The author examines the systematic placement of the populations described as C. strenuus Fischer in both locations, their annual life cycle, and their annual size variations.