Abstract:
Recent observations of the temperature anisotropies of the cosmic microwave background (CMB) favor an inflationary paradigm in which the scale factor of the universe inflated by many orders of magnitude at some very early time. Such a scenario would produce the observed large-scale isotropy and homogeneity of the universe, as well as the scale-invariant perturbations responsible for the observed (10 parts per million) anisotropies in the CMB. An inflationary epoch is also theorized to produce a background of gravitational waves (or tensor perturbations), the effects of which can be observed in the polarization of the CMB. The E-mode (or parity-even) polarization of the CMB, which is produced by scalar perturbations, has now been measured with high significance. In contrast, the B-mode (or parity-odd) polarization, which is sourced by tensor perturbations, has yet to be observed. A detection of the B-mode polarization of the CMB would provide strong evidence for an inflationary epoch early in the universe's history.
In this work, we explore experimental techniques and analysis methods used to probe the B-mode polarization of the CMB. These experimental techniques have been used to build the Bicep2 telescope, which was deployed to the South Pole in 2009. After three years of observations, Bicep2 has acquired one of the deepest observations of the degree-scale polarization of the CMB to date. This work also describes analysis methods developed for the Bicep1 three-year data analysis, which includes the full data set acquired by Bicep1. This analysis has produced the tightest constraint on the B-mode polarization of the CMB to date, corresponding to a tensor-to-scalar ratio estimate of r = 0.04 ± 0.32, or a Bayesian 95% credible interval of r < 0.70. These analysis methods, in addition to producing this new constraint, are directly applicable to future analyses of Bicep2 data. Taken together, the experimental techniques and analysis methods described herein promise to open a new observational window into the inflationary epoch and the initial conditions of our universe.
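To illustrate the relationship between the two numbers quoted above, the following is a minimal sketch of a Bayesian upper limit of this type, assuming a Gaussian likelihood with the quoted mean and width and a flat prior restricted to the physical region r >= 0. The thesis' actual likelihood need not be Gaussian, so this toy calculation does not exactly reproduce the published bound.

```python
import numpy as np
from scipy import stats

# Assumed Gaussian likelihood: r_hat = 0.04 +/- 0.32 (values from the abstract)
r_hat, sigma = 0.04, 0.32

# Posterior: Gaussian truncated to the physical region r >= 0
lower_mass = stats.norm.cdf(0.0, loc=r_hat, scale=sigma)

# Find r_95 such that 95% of the truncated posterior lies below it
target = lower_mass + 0.95 * (1.0 - lower_mass)
r_95 = stats.norm.ppf(target, loc=r_hat, scale=sigma)
print(f"95% credible upper limit: r < {r_95:.2f}")  # ~0.66 under these assumptions
```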
Abstract:
Seismic reflection methods have been used extensively to probe the Earth's crust and to infer the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full-wave synthetics.
Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival times decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full-wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back-projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault-zone block. On the northeast, the fault zone is bounded by a fault that dips into the San Andreas, intersecting it at 3 km depth, and that carries slices of serpentinized ultramafics. These interpretations can be made despite complications introduced by lateral heterogeneities.
In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Records obtained from industry showed that high-quality data could be acquired in areas having a deeper, older sedimentary cover, prompting a redefinition of the geologic objectives. Long-offset stationary arrays were designed to provide reversed, wider-angle coverage of the deep crust over parts of the survey. The preliminary field tests, constant monitoring of data quality, and parameter adjustment allowed 108 km of excellent crustal data to be obtained.
This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, discriminating between structures resulting from broad-scale and local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lame's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude-trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts, indicating porous zones or the juxtaposition of units of mutually distant origin. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m of layering with a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River, indicating that Miocene extension there either thinned only the basal zone or that the basal zone developed regionally after the extensional event.
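As a minimal sketch (not the thesis' processing flow) of why the peak frequency of a thin-layer reflection rises with offset: the top and base reflections (+R and -R) interfere with a two-way delay dt = 2*h*cos(theta)/v, and the composite amplitude spectrum, proportional to |sin(pi*f*dt)|, peaks at f = 1/(2*dt). The thickness and velocity below are illustrative parameters, not survey values.

```python
import numpy as np

h, v = 50.0, 3000.0          # assumed layer thickness (m) and interval velocity (m/s)
for theta_deg in (0.0, 20.0, 40.0):
    theta = np.radians(theta_deg)
    dt = 2.0 * h * np.cos(theta) / v      # two-way delay within the thin layer
    f_peak = 1.0 / (2.0 * dt)             # first spectral peak of the interfering pair
    print(f"incidence {theta_deg:4.1f} deg: delay {dt*1e3:5.2f} ms, "
          f"peak frequency {f_peak:5.1f} Hz")
```

Inverting the trend of f_peak against offset for thickness and velocity contrast is the spectral-dispersion analysis described above.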
Abstract:
This work is divided into two independent papers.
PAPER 1.
Spall velocities were measured for nine experimental impacts into San Marcos gabbro targets. Impact velocities ranged from 1 to 6.5 km/sec. Projectiles were iron, aluminum, lead, and basalt of varying sizes, with masses ranging from a 4 g lead bullet to a 0.04 g aluminum sphere. The velocities of fragments were measured from high-speed films of the events. The maximum spall velocity observed was 30 m/sec, or 0.56 percent of the 5.4 km/sec impact velocity. The measured velocities were compared to the spall velocities predicted by the spallation model of Melosh (1984). The compatibility between the spallation model for large planetary impacts and the results of these small-scale experiments is considered in detail.
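A hedged illustration of the film-based velocity measurement: fragment positions are digitized frame by frame, and the velocity is taken as the slope of a linear fit of position versus time. The frame rate and positions below are invented for illustration, not data from these experiments.

```python
import numpy as np

frame_rate = 10000.0                      # assumed frames per second
frames = np.arange(6)                     # frame indices for one tracked fragment
t = frames / frame_rate                   # time of each frame (s)
x = np.array([0.000, 0.003, 0.0061, 0.0089, 0.0121, 0.0150])  # position (m)

v, x0 = np.polyfit(t, x, 1)               # slope of x(t) gives the fragment velocity
print(f"fragment velocity ~ {v:.1f} m/s")  # ~30 m/s, the maximum reported above
```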
The targets were also bisected to observe the pattern of internal fractures. A series of fractures was observed whose locations coincided with the boundary between rock subjected to the peak shock compression and a theoretical "near-surface zone" predicted by the spallation model. Thus, between this boundary and the free surface, the target material should receive reduced levels of compressive stress compared to the more highly shocked region below.
PAPER 2.
Carbonate samples from the nuclear explosion crater OAK and a terrestrial impact crater, Meteor Crater, were analyzed for shock damage using electron paramagnetic resonance (EPR). The first series of samples from OAK Crater was obtained from six boreholes within the crater, and the second series consisted of ejecta samples recovered from the crater floor. The degree of shock damage in the carbonate material was assessed by comparing the sample spectra to spectra of Solenhofen limestone that had been shocked to known pressures.
The results of the OAK borehole analysis identify a thin zone of highly shocked carbonate material underneath the crater floor. This zone has a maximum depth of approximately 200 ft below the sea floor at the ground-zero borehole and decreases in depth towards the crater rim. A layer of highly shocked material is also found on the surface in the vicinity of the reference borehole, located outside the crater; this material could represent a fallout layer. The ejecta samples have experienced a range of shock pressures.
It was also demonstrated that the EPR technique is feasible for the study of terrestrial impact craters formed in carbonate bedrock. The results of the Meteor Crater analysis suggest a slight degree of shock damage in the β member of the Kaibab Formation exposed in the crater walls.
Abstract:
Sources and effects of astrophysical gravitational radiation are explained briefly to motivate discussion of the Caltech 40 meter antenna, which employs laser interferometry to monitor proper distances between inertial test masses. Practical considerations in construction of the apparatus are described. Redesign of test mass systems has resulted in a reduction of noise from internal mass vibrations by up to two orders of magnitude at some frequencies. A laser frequency stabilization system was developed which corrects the frequency of an argon-ion laser to a residual fluctuation level bounded by the spectral density √S_ν(f) ≤ 60 µHz/√Hz at fluctuation frequencies near 1.2 kHz. These and other improvements have contributed to reducing the spectral density of equivalent gravitational-wave strain noise to √S_h(f) ≈ 10^(-19)/√Hz at these frequencies.
Finally, observations made with the antenna in February and March of 1987 are described. Kilohertz-band gravitational waves produced by the remnant of the recent supernova are shown to be theoretically unlikely at the strength required for confident detection in this antenna (then operating at poorer sensitivity than that quoted above). A search for periodic waves in the recorded data, comprising Fourier analysis of four 10^5-second samples of the antenna strain signal, was used to place new upper limits on periodic gravitational radiation at frequencies between 305 Hz and 5 kHz. In particular, continuous waves of any polarization are ruled out above strain amplitudes of 1.2 x 10^(-18) R.M.S. for waves emanating from the direction of the supernova, and 6.2 x 10^(-19) R.M.S. for waves emanating from the galactic center, between 1.5 and 4 kHz. Between 305 Hz and 5 kHz, no strains greater than 1.2 x 10^(-17) R.M.S. were detected from either direction. Limitations of the analysis and potential improvements are discussed, as are prospects for future searches.
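A schematic of the periodic-wave search described above: Fourier-analyze a long strain sample and convert the largest spectral peak in a band to an R.M.S. strain amplitude. The sampling rate, duration, and data below are placeholders, not the 1987 antenna parameters.

```python
import numpy as np

fs, T = 10000.0, 100.0                 # assumed sample rate (Hz) and duration (s)
n = int(fs * T)
strain = np.random.normal(0.0, 1e-17, n)   # stand-in for the recorded strain signal

freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spec = np.fft.rfft(strain)
# A pure sinusoid of amplitude A appears in an n-point FFT with peak
# magnitude A*n/2; its R.M.S. amplitude is A/sqrt(2).
amp_rms = np.abs(spec) / (n / 2) / np.sqrt(2)

band = (freqs >= 305.0) & (freqs <= 5000.0)   # the search band quoted above
print(f"largest candidate in band: {amp_rms[band].max():.2e} R.M.S. strain")
```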
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which inform the next test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
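The following is a schematic EC2-style greedy step, not the thesis' BROAD implementation (in particular, noise handling is omitted and the tiny hypothesis set is invented). Each hypothesis, a theory with fixed parameters, deterministically predicts the choice on every candidate test; edges connect hypotheses in different theory classes, and the greedy picks the test that cuts the most edge weight in expectation.

```python
import numpy as np
from itertools import combinations

preds = np.array([[0, 1, 1, 0],    # hypothesis 0
                  [0, 1, 0, 1],    # hypothesis 1
                  [1, 0, 1, 1],    # hypothesis 2
                  [1, 1, 0, 0]])   # hypothesis 3 (rows: hypotheses, cols: tests)
theory = np.array([0, 0, 1, 1])    # theory class of each hypothesis
p = np.full(4, 0.25)               # prior over hypotheses

def edge_weight(p, alive):
    # total weight of uncut edges: surviving pairs whose theory classes differ
    return sum(p[i] * p[j]
               for i, j in combinations(np.flatnonzero(alive), 2)
               if theory[i] != theory[j])

def expected_cut(p, alive, t):
    # expected edge weight removed by observing the outcome of test t
    before = edge_weight(p, alive)
    cut = 0.0
    for outcome in (0, 1):
        consistent = alive & (preds[:, t] == outcome)
        prob = p[consistent].sum()
        if prob > 0:
            cut += prob * (before - edge_weight(p, consistent))
    return cut

alive = np.ones(4, dtype=bool)
best = max(range(preds.shape[1]), key=lambda t: expected_cut(p, alive, t))
print(f"most informative test: {best}")   # test 0 separates the two classes
```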
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
These models treat the passage of time as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
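One standard way to see this connection, offered here as our own gloss rather than the thesis' proof: discount exponentially in subjective time τ(t) = Xt, where the random rate X > 0 is shared across horizons (making τ positively dependent across delays).

```latex
% With X \sim \mathrm{Gamma}(k, \lambda) and exponential discounting at
% rate \rho in subjective time \tau(t) = X t:
\[
  D(t) \;=\; \mathbb{E}\!\left[e^{-\rho\,\tau(t)}\right]
        \;=\; \mathbb{E}\!\left[e^{-\rho X t}\right]
        \;=\; \left(1 + \frac{\rho t}{\lambda}\right)^{-k},
\]
% a generalized-hyperbolic discount function whose instantaneous discount
% rate \rho k/(\lambda + \rho t) declines with delay, producing the
% preference reversals (temporal choice inconsistency) described above.
```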
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase disproportionately. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
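A minimal sketch of a reference-dependent, loss-averse utility inside a logit discrete choice model, in the spirit of the analysis described above. Parameter names and values are illustrative, not estimates from the retailer data.

```python
import numpy as np

beta = 0.8        # assumed price sensitivity
eta = 0.3         # assumed weight on gains relative to the reference price
lam = 2.25        # assumed loss-aversion coefficient (losses loom larger)

def utility(price, ref_price):
    gain = np.maximum(ref_price - price, 0.0)   # price below the reference
    loss = np.maximum(price - ref_price, 0.0)   # price above the reference
    return -beta * price + eta * gain - lam * eta * loss

def choice_probs(prices, ref_prices):
    u = utility(np.asarray(prices), np.asarray(ref_prices))
    e = np.exp(u - u.max())                     # softmax over the choice set
    return e / e.sum()

# Two substitutes at the same price: item 0 just came off a discount, so its
# reference price is lower, the current price registers as a loss, and
# demand shifts toward item 1.
print(choice_probs(prices=[10.0, 10.0], ref_prices=[8.0, 10.0]))
```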
In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Detailed pulsed neutron measurements have been performed in graphite assemblies ranging in size from 30.48 cm x 38.10 cm x 38.10 cm to 91.44 cm x 66.67 cm x 66.67 cm. Results of the measurements have been compared to a modeled theoretical computation.
In the first set of experiments, we measured the effective decay constant of the neutron population in ten graphite stacks as a function of time after the source burst. We found the decay to be non-exponential in the six smallest assemblies, while in three larger assemblies the decay was exponential over a significant portion of the total measuring interval. The decay in the largest stack was exponential over the entire ten millisecond measuring interval. The non-exponential decay mode occurred when the effective decay constant exceeded 1600 sec^(-1).
In a second set of experiments, we measured the spatial dependence of the neutron population in four graphite stacks as a function of time after the source pulse. By performing a harmonic analysis of the spatial shape of the neutron distribution, we were able to compute the effective decay constants of the first two spatial modes. In addition, we were able to compute the time-dependent effective wave number of the neutron distribution in the stacks.
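A sketch of the harmonic analysis described above, with invented geometry and decay constants standing in for the measurements: project the spatial shape of the neutron population onto the first two sine modes of the stack at each time bin, then fit an exponential decay to each modal amplitude.

```python
import numpy as np

L = 91.44                                   # stack dimension (cm), for illustration
x = np.linspace(0.0, L, 40)                 # detector positions
t = np.linspace(0.0, 10e-3, 50)             # time after the burst (s)

# Synthetic stand-in for the measured N(x, t): two decaying spatial modes
N = (np.outer(np.exp(-800.0 * t), np.sin(np.pi * x / L)) +
     0.3 * np.outer(np.exp(-2500.0 * t), np.sin(2 * np.pi * x / L)))

for n in (1, 2):
    mode = np.sin(n * np.pi * x / L)
    a_n = N @ mode / (mode @ mode)          # least-squares modal amplitude vs time
    decay = -np.polyfit(t, np.log(np.abs(a_n)), 1)[0]
    print(f"mode {n}: effective decay constant ~ {decay:.0f} sec^(-1)")
```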
Finally, we used a Laplace transform technique and a simple modeled scattering kernel to solve a diffusion equation for the time and energy dependence of the neutron distribution in the graphite stacks. Comparison of these theoretical results with the results of the first set of experiments indicated that a more exact theoretical analysis would be required to adequately describe the experiments.
The implications of our experimental results for the theory of pulsed neutron experiments in polycrystalline media are discussed in the last chapter.
Abstract:
The strength of materials at extreme pressures (>1 Mbar, or 100 GPa) and high strain rates (10^6-10^8 s^(-1)) is not well characterized. The goal of the research outlined in this thesis is to study the strength of tantalum (Ta) under these conditions. The Omega Laser at the Laboratory for Laser Energetics in Rochester, New York is used to create such extreme conditions. Targets are designed with ripples or waves on the surface, and these samples are subjected to high pressures using Omega's high-energy laser beams. In these experiments, the observational parameter is the Richtmyer-Meshkov (RM) instability in the form of growth of single-mode ripples. The experimental platform is the "ride-along" laser compression recovery experiment, which provides a way to recover specimens that have been subjected to high pressures. Six different experiments are performed on the Omega laser using single-mode tantalum targets at different laser energies, where the energy is the amount of laser energy that impinges on the target. For each target, values of the growth factor are obtained by comparing the profile of the ripples before and after the experiment. With increasing energy, the growth factor increased.
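A hedged sketch of the growth-factor measurement: take the amplitude of the single-mode ripple at its known wavelength from a profile of the target before and after the shot, and form their ratio. The profiles and the wavelength below are invented for illustration, not target specifications.

```python
import numpy as np

wavelength = 50e-6                            # assumed ripple wavelength (m)
x = np.linspace(0.0, 10 * wavelength, 1000, endpoint=False)

def mode_amplitude(profile):
    # amplitude of the Fourier component at the ripple wavelength
    k = 10                                    # 10 ripple periods span the window
    c = np.fft.rfft(profile)[k]
    return 2.0 * np.abs(c) / profile.size

before = 1.0e-6 * np.sin(2 * np.pi * x / wavelength)   # pre-shot 1 um ripples
after = 3.2e-6 * np.sin(2 * np.pi * x / wavelength)    # stand-in post-shot profile
gf = mode_amplitude(after) / mode_amplitude(before)
print(f"growth factor ~ {gf:.2f}")
```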
Engineering simulations are used to interpret the measurements and correlate growth factor with a measure of strength. To validate the engineering constitutive model for tantalum, a series of simulations is performed using the code Eureka, based on the Optimal Transportation Meshfree (OTM) method. Two different configurations are studied in the simulations: RM instabilities in single-mode and multimode ripples. Six simulations are performed for the single-ripple configuration of the RM instability experiment, with drives corresponding to the laser energies used in the experiments. Each successive simulation is performed at a higher drive energy, and it is observed that the growth factor increases with increasing energy. Overall, there is favorable agreement between the simulations and the experiments: the peak growth factors agree to within 10%. For the multimode simulations, the goal is to assist in the design of the laser-driven experiments on the Omega laser. A series of three-mode and four-mode patterns is simulated at various energies and the resulting growth of the RM instability is computed. Based on the results of the simulations, a configuration is selected for the multimode experiments. These simulations also serve as validation of the constitutive model and the material parameters for tantalum used in the simulations.
Samples with initial perturbations in the form of single-mode and multimode ripples are subjected to high pressures, and the Richtmyer-Meshkov instability is investigated in both laser compression experiments and simulations. Correlating the growth of these ripples with measures of strength yields a better understanding of the strength of tantalum at high pressures.
Abstract:
A major part of the support for fundamental research on aquatic ecosystems continues to be provided by the Natural Environment Research Council (NERC). Funds are released for "thematic" studies in a selected special topic or programme. "Testable Models of Aquatic Ecosystems" was a Special Topic of the NERC, initiated in 1995, the aim of which was to promote ecological modelling by making new links between experimental aquatic biologists and state-of-the-art modellers. The Topic covered both marine and freshwater systems. This paper summarises projects on aspects of the responses of individual organisms to the effects of environmental variability, on the assembly, permanence and resilience of communities, and on aspects of spatial models. The authors conclude that the NERC Special Topic has been highly successful in promoting the development and application of models, most particularly through the interplay between experimental ecologists and formal modellers.
Abstract:
Ponds and shallow lakes are likely to be strongly affected by climate change, and by increases in environmental temperature in particular. Hydrological regimes and nutrient cycling may be altered, plant and animal communities may undergo changes in both composition and dynamics, and long-term, difficult-to-reverse switches between alternative stable equilibria may occur. A thorough understanding of the potential effects of increased temperature on ponds and shallow lakes is desirable because these ecosystems are of immense importance throughout the world as sources of drinking water and for their amenity and conservation value. This understanding can only come through experimental studies in which the effects of different temperature regimes are compared. This paper reports design details and operating characteristics of a recently constructed experimental facility consisting of 48 aquatic microcosms that mimic the pond and shallow-lake environment. Thirty-two of the microcosms can be heated and regulated to simulate climate-change scenarios, including those predicted for the UK. The authors also summarise current and future experimental uses of the microcosms.
Abstract:
Pseudo-thermal light has been widely used in ghost imaging experiments. In order to understand the differences between pseudo-thermal and thermal sources, we propose a method to investigate whether a light source has cross-spectral purity (CSP), and we experimentally measure the cross-spectral properties of a pseudo-thermal light source in the near-field and far-field zones. Moreover, we present a theoretical analysis of the influence of the cross-spectral properties on ghost imaging.
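A toy pseudo-thermal ghost-imaging simulation, offered as background illustration only (the paper concerns the cross-spectral purity of the source, which this spatial toy model does not address): the image is recovered from intensity correlations between a spatially resolved reference arm and a bucket detector behind the object.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nshots = 200, 20000
obj = np.zeros(npix)
obj[80:120] = 1.0                      # transmissive slit as the "object"

speckle = rng.exponential(1.0, (nshots, npix))   # thermal-like intensity patterns
bucket = speckle @ obj                           # total light behind the object

# G(x) = <I_ref(x) I_bucket> - <I_ref(x)><I_bucket> reveals the object
g = speckle.T @ bucket / nshots - speckle.mean(0) * bucket.mean()
print(f"mean correlation in slit: {g[80:120].mean():.2f}, "
      f"outside: {g[:60].mean():.2f}")
```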