945 results for Precision Xtra®


Relevance:

10.00%

Abstract:

Observations of solar energetic particles (SEPs) from 22 solar flares in the 1977-1982 time period are reported. The observations were made by the Cosmic Ray Subsystem on board the Voyager 1 and 2 spacecraft. SEP abundances have been obtained for all elements with 3 ≤ Z ≤ 30 except Li, Be, B, F, Sc, V, Co, and Cu, for which upper limits have been obtained. Statistically meaningful abundances of several rare elements (e.g., P, Cl, K, Ti, Mn) have been determined for the first time, and the average abundances of the more abundant elements have been determined with improved precision, typically a factor of three better than the best previous determinations.

Previously reported results concerning the dependence of the fractionation of SEPs relative to the photosphere on first ionization potential (FIP) have been confirmed and amplified with the new data. The monotonic Z-dependence of the flare-to-flare variation noted by earlier studies was found to be interpretable as a fractionation, produced by acceleration of the particles from the corona and their propagation through interplanetary space, that is ordered by the ionic charge-to-mass ratio Q/M of the species making up the SEPs. Q/M was found to be the primary organizing parameter of acceleration and propagation effects in SEPs, as evidenced by the Q/M dependence of the temporal, spatial, and energy behavior within flares and of the abundance variability from flare to flare.

An unfractionated coronal composition was derived by applying a simple Q/M fractionation correction to the observed average SEP composition, to simultaneously correct for all Q/M-correlated acceleration/propagation fractionation of SEPs. The resulting coronal composition agrees well with current XUV/X-ray spectroscopic measurements of coronal composition but is of much higher precision and is available for a much larger set of elements. Compared to spectroscopic photospheric abundances, the SEP-derived corona appears depleted in C and somewhat enriched in Cr (and possibly Ca and Ti).
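The correction step can be pictured with a small numerical sketch: a hypothetical power law in Q/M is divided out of observed SEP abundances to estimate an unfractionated composition. The exponent, ionic charge states, and abundances below are illustrative placeholders, not the values derived in the thesis.

```python
# Hypothetical power-law Q/M fractionation correction. The exponent, ionic
# charge states, and abundances are illustrative placeholders, not the
# values derived in the thesis.
alpha = 0.7                             # assumed fractionation exponent
ions = {"O": (7, 16), "Fe": (14, 56)}   # assumed coronal charge Q and mass M
sep = {"O": 1.00, "Fe": 0.13}           # SEP abundances (arbitrary scale)

def unfractionated(abund, q, m, q_ref=7.0, m_ref=16.0):
    """Divide out a power-law (Q/M)^alpha fractionation relative to a reference."""
    return abund / ((q / m) / (q_ref / m_ref)) ** alpha

corona = {el: unfractionated(sep[el], q, m) for el, (q, m) in ions.items()}
print(corona)   # Fe rises relative to O once the Q/M bias is removed
```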

An unfractionated photospheric composition was derived by applying a simple FIP fractionation correction to the derived coronal composition, to correct for the FIP-associated fractionation of the corona during its formation from photospheric material. The resulting composition agrees well with the photospheric abundance tabulation of Grevesse (1984), except for a C abundance that is at least 50% lower and Cr (and possibly Ti) abundances that are significantly greater. The results support the Grevesse photospheric Fe abundance, about 50% higher than meteoritic and earlier solar values. The SEP-derived photospheric composition is not generally of higher precision than the available spectroscopic data, but it relies on fewer physical parameters and is available for some elements (C, N, Ne, Ar) that cannot be measured spectroscopically in the photosphere.

Relevance:

10.00%

Abstract:

Researchers have spent decades refining their methods for fabricating smaller, finer-tuned, higher-quality nanoscale optical elements, with the goal of making more sensitive and accurate optical measurements of the world around them. Quantum optics has been a well-established tool of choice for these increasingly sensitive measurements, which have repeatedly pushed the limits on measurement accuracy set forth by quantum mechanics. A recent development in quantum optics has been the creative integration of robust, high-quality, well-established macroscopic experimental systems with highly engineerable on-chip nanoscale oscillators fabricated in cleanrooms. However, merging large systems with nanoscale oscillators often requires the oscillators to have extremely high aspect ratios, which makes them delicate and difficult to fabricate with experimentally reasonable repeatability, yield, and quality. In this work we give an overview of our research, which focused on microscopic oscillators coupled to macroscopic optical cavities, toward the goal of cooling them to their motional ground state in room-temperature environments.

The quality factor of a mechanical resonator is an important figure of merit for various sensing applications and for observing quantum behavior. We demonstrated a technique for pushing the quality factor of a micromechanical resonator beyond conventional material and fabrication limits by using an optical field to stiffen and trap a particular motional mode of a nanoscale oscillator. Optical forces increase the oscillation frequency by storing most of the mechanical energy in a nearly lossless optical potential, thereby strongly diluting the effects of material dissipation. By placing a 130 nm thick SiO2 pendulum in an optical standing wave, we achieve an increase in the pendulum center-of-mass frequency from 6.2 to 145 kHz. The corresponding quality factor increases 50-fold from its intrinsic value to a final value of Qm = 5.8(1.1) × 10^5, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems.

We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments that integrate macroscopic optical setups with lithographically engineered nanodevices. Coupled with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams that have large atom-photon coupling and near-4π steradian optical access for cooling and trapping atoms. We describe a fully integrated and scalable design in which cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d0 ≈ 0.15. The nanodevice illuminates new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale.

We then describe our work with superconducting microwave resonators coupled to a phononic cavity, toward the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion.
We give an overview of our characterizations of several types of substrates for fabricating a low-loss, high-frequency electromechanical system. We describe our electromechanical system fabricated on a Si3N4 membrane, which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high-frequency localized modes of a phononic nanobeam. Using our suspended-membrane geometry, we isolate our system from substrates with significant loss tangents, drastically reducing the parasitic capacitance of our superconducting circuit to ≈ 2.5 fF. This opens up a number of possibilities for a new class of low-loss, high-frequency electromechanics with relatively large electromechanical coupling. We present our substrate studies, fabrication methods, and device characterization.
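As a rough consistency check on the optical-trapping figures quoted above: if the optical spring were perfectly lossless, dissipation dilution would enhance the quality factor by the squared frequency ratio. A back-of-envelope comparison (a reading of the quoted numbers, not the thesis's analysis):

```python
# Back-of-envelope dissipation-dilution check using the pendulum figures
# quoted above (a reading of those numbers, not the thesis's analysis).
f_int, f_trap = 6.2e3, 145e3              # Hz: intrinsic and trapped frequencies
ideal = (f_trap / f_int) ** 2             # ideal lossless-spring Q enhancement
print(f"ideal Q enhancement ~ {ideal:.0f}x")          # ~547x
print(f"fraction realized at 50x: {50 / ideal:.2f}")  # trap loss claims the rest
```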

Relevance:

10.00%

Abstract:

In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process and for the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the HZZ tree-level coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; namely, compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb⁻¹, the golden channel will be able to start probing the diphoton couplings of the Higgs boson in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
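A one-dimensional toy conveys the structure of an unbinned likelihood that is continuous in a coupling parameter: the signal pdf is a mixture whose composition tracks the coupling, and the fit maximizes the event-by-event likelihood. All shapes and numbers below are invented; the real analysis is 8-dimensional with detector transfer functions, which this sketch does not attempt to model.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import expon, norm

# 1D toy of an unbinned likelihood that is continuous in a coupling c:
# the signal pdf is a mixture whose composition tracks c. Invented shapes.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.3, 0.2, 300), rng.exponential(1.0, 700)])
f_sig = 0.3                                   # assumed signal fraction

def nll(c):
    sig = (1 - c) * norm.pdf(data, 0.3, 0.2) + c * norm.pdf(data, 0.6, 0.3)
    bkg = expon.pdf(data)                     # irreducible background shape
    return -np.sum(np.log(f_sig * sig + (1 - f_sig) * bkg))

res = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"fitted coupling admixture: {res.x:.3f}")
```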

Relevance:

10.00%

Abstract:

It is often difficult to define 'water quality' with any degree of precision. One approach, suggested by Battarbee (1997), is based on the extent to which individual lakes have changed compared with their natural 'baseline' status. Defining the baseline status of artificial lakes and reservoirs, however, is very difficult. In ecological terms, the definition of quality must include some consideration of their functional characteristics and the extent to which these characteristics are self-sustaining. The challenge of managing lakes in a sustainable way is particularly acute in semi-arid, Mediterranean countries. Here the quality of the water is strongly influenced by the unpredictability of the rainfall as well as year-to-year variations in the seasonal averages. Wise management requires profound knowledge of how these systems function. Thus a holistic approach must be adopted and the factors influencing the seasonal dynamics of the lakes quantified over a range of spatial and temporal scales.

In this article, the authors describe some of the ways in which both long-term and short-term changes in the weather have influenced the seasonal and spatial dynamics of phytoplankton in El Gergal, a water supply reservoir situated in the south of Spain. The quality of the water stored in this reservoir is typically very good, but surface blooms of algae commonly appear during warm, calm periods when the water level is low. El Gergal is managed by the Empresa Municipal de Abastecimiento y Saneamiento (EMASESA) and supplies water for domestic, commercial, and industrial use to an area that includes the city of Seville and twelve of its surrounding towns (ca. 1.3 million inhabitants). El Gergal is the last in a chain of four reservoirs situated in the Rivera de Huelva basin, a tributary of the Guadalquivir river. It was commissioned by EMASESA in 1979, and since then the company has monitored its main limnological parameters on at least a monthly basis and used this information to improve the management of the reservoir. As a consequence of these intensive studies, the physical, chemical, and biological information acquired during this period makes the El Gergal database one of the most complete in Spain.

The authors focus on three weather-related effects that have had a significant impact on the composition and distribution of phytoplankton in El Gergal: (i) the changes associated with severe droughts; (ii) the spatial variations produced by short-term changes in the weather; and (iii) the impact of water transfers on the seasonal dynamics of the dinoflagellate Ceratium.

Relevance:

10.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness; otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks, which are multiphase and radial, most power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
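A minimal sketch of this style of distributed scheduling, assuming a coordinator that broadcasts the aggregate demand each iteration while each load updates its own profile; the flattening objective, step size, and omission of rate limits are simplifications for illustration, not Algorithm 1 itself.

```python
import numpy as np

# Sketch of distributed deferrable-load scheduling by projected gradient
# descent. Objective: flatten aggregate demand, min ||base + sum_i p_i||^2,
# with each load i required to consume total energy E_i over T slots.
# Rate limits are omitted for brevity; this is not the thesis's Algorithm 1.
rng = np.random.default_rng(1)
T, n_loads, gamma = 24, 5, 0.05
base = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, T))  # inflexible demand
E = rng.uniform(2.0, 5.0, n_loads)          # energy each deferrable load needs
p = np.ones((n_loads, T)) * E[:, None] / T  # start from flat profiles

for it in range(15):                        # ~15 iterations suffice empirically
    aggregate = base + p.sum(axis=0)        # broadcast by the coordinator
    for i in range(n_loads):                # each load updates locally
        p[i] -= gamma * 2 * aggregate       # gradient of ||aggregate||^2 in p_i
        p[i] += (E[i] - p[i].sum()) / T     # project back onto sum_t p_i = E_i

print("std of final aggregate demand:", (base + p.sum(axis=0)).std())
```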

We then extend Algorithm 1 to a real-time setting in which deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: it uses updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable loads. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation, and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is much more numerically stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
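To make the relaxation concrete, here is a toy two-bus branch-flow instance in cvxpy in which the quadratic equality P² + Q² = ℓv is relaxed to a second-order cone inequality. The line data and voltage bounds are made up, and this is a schematic single-line instance, not BFM-SDP or the thesis's formulations.

```python
import cvxpy as cp

# Toy two-bus branch-flow SOCP relaxation: one line from the substation
# (squared voltage v0) to a load bus. The quadratic equality P^2 + Q^2 = l*v0
# is relaxed to an inequality; exactness of such relaxations is what the
# results above address. All data is made up.
r, x = 0.02, 0.04                 # line impedance (p.u.)
p_load, q_load = 0.3, 0.1         # load at bus 1 (p.u.)
v0 = 1.0                          # squared substation voltage
P, Q, l, v1 = cp.Variable(), cp.Variable(), cp.Variable(), cp.Variable()
constraints = [
    P - r * l == p_load,          # real power balance at bus 1
    Q - x * l == q_load,          # reactive power balance at bus 1
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,  # voltage drop
    cp.square(P) + cp.square(Q) <= l * v0,               # relaxed cone constraint
    0.9**2 <= v1, v1 <= 1.1**2,   # voltage regulation bounds
]
prob = cp.Problem(cp.Minimize(l), constraints)  # minimize line losses
prob.solve()
print(P.value, Q.value, v1.value, l.value)
```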

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more-than-70× speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
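The linear approximation under those two assumptions can be sketched for a single-phase radial feeder (a LinDistFlow-style model; the feeder data is invented, and this is not the thesis's multiphase derivation):

```python
import math

# LinDistFlow-style linearized power flow on a small radial feeder, assuming
# negligible line losses and balanced voltages. All feeder data is invented.
# lines: (parent, child, r, x) in per-unit; bus 0 is the substation.
lines = [(0, 1, 0.01, 0.02), (1, 2, 0.02, 0.04), (1, 3, 0.015, 0.03)]
p = {1: 0.1, 2: 0.3, 3: 0.2}    # real power draws (p.u.)
q = {1: 0.05, 2: 0.1, 3: 0.1}   # reactive power draws (p.u.)
v = {0: 1.0}                    # squared voltage magnitude at the substation

children = {}
for a, b, _, _ in lines:
    children.setdefault(a, []).append(b)

def downstream(bus):
    """Total (P, Q) carried by the line feeding `bus`, ignoring losses."""
    P, Q = p.get(bus, 0.0), q.get(bus, 0.0)
    for c in children.get(bus, []):
        dP, dQ = downstream(c)
        P, Q = P + dP, Q + dQ
    return P, Q

for a, b, r, x in lines:        # lines are listed root-first, so v[a] exists
    P, Q = downstream(b)
    v[b] = v[a] - 2 * (r * P + x * Q)   # linearized voltage drop

print({bus: round(math.sqrt(vb), 4) for bus, vb in v.items()})
```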

Relevance:

10.00%

Abstract:

These studies explore how, where, and when representations of variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that selects an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract "value space" in order to produce a decision. Despite much progress, in particular the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of stimulus type, confirming a key tenet of value-based decision-making: that value is represented in the abstract. However, I also show that stimulus-dependent value representations are present in the brain during decision-making, and I suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly, there is both neural and behavioral evidence that two distinct control systems are at work during action selection: the "goal-directed" system, which selects actions based on an internal model of the environment, and the "habitual" system, which generates responses based only on antecedent stimuli. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes, as well as stimuli and actions, be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple but non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects' reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
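A schematic of the belief-thresholding idea, assuming a discrete hypothesis space and made-up likelihoods rather than the fitted model from the experiment: hypotheses whose posterior falls below a cutoff are pruned and no longer updated.

```python
import numpy as np

# Minimal sketch of Bayesian belief updating with belief thresholding:
# hypotheses whose posterior drops below a cutoff are eliminated and no
# longer updated. Priors, likelihoods, and the cutoff are illustrative.
def update(belief, likelihood, threshold=0.1):
    posterior = belief * likelihood            # Bayes' rule, unnormalized
    posterior /= posterior.sum()
    posterior[posterior < threshold] = 0.0     # prune low-probability hypotheses
    return posterior / posterior.sum()

belief = np.full(4, 0.25)                      # uniform prior over 4 hidden states
for like in ([0.7, 0.2, 0.05, 0.05], [0.6, 0.3, 0.05, 0.05]):
    belief = update(belief, np.array(like))
    print(belief)
```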

Relevance:

10.00%

Abstract:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.

Relevance:

10.00%

Abstract:

In inter-satellite semiconductor-laser communication systems, assessing the wavefront quality of the transmitted beam is a difficult problem. To address it, we briefly introduce the white-light lateral double-shearing interferometer and then report its use to measure the wavefront of a near-diffraction-limited diode-laser beam; on this basis, a formula for the far-field divergence is derived. The measured near-field wavefront height difference is 0.2λ, and the beam divergence obtained via Fraunhofer diffraction is only 64.8 μrad, indicating that the beam is close to the optical diffraction limit. The results also show that the double-shearing interferometer offers high sensitivity and good practicality.

Relevance:

10.00%

Abstract:

With the advent of the laser in the year 1960, the field of optics experienced a renaissance from what was considered to be a dull, solved subject to an active area of development, with applications and discoveries which are yet to be exhausted 55 years later. Light is now nearly ubiquitous not only in cutting-edge research in physics, chemistry, and biology, but also in modern technology and infrastructure. One quality of light, that of the imparted radiation pressure force upon reflection from an object, has attracted intense interest from researchers seeking to precisely monitor and control the motional degrees of freedom of an object using light. These optomechanical interactions have inspired myriad proposals, ranging from quantum memories and transducers in quantum information networks to precision metrology of classical forces. Alongside advances in micro- and nano-fabrication, the burgeoning field of optomechanics has yielded a class of highly engineered systems designed to produce strong interactions between light and motion.

Optomechanical crystals are one such system in which the patterning of periodic holes in thin dielectric films traps both light and sound waves to a micro-scale volume. These devices feature strong radiation pressure coupling between high-quality optical cavity modes and internal nanomechanical resonances. Whether for applications in the quantum or classical domain, the utility of optomechanical crystals hinges on the degree to which light radiating from the device, having interacted with mechanical motion, can be collected and detected in an experimental apparatus consisting of conventional optical components such as lenses and optical fibers. While several efficient methods of optical coupling exist to meet this task, most are unsuitable for the cryogenic or vacuum integration required for many applications. The first portion of this dissertation will detail the development of robust and efficient methods of optically coupling optomechanical resonators to optical fibers, with an emphasis on fabrication processes and optical characterization.

I will then proceed to describe a few experiments enabled by the fiber couplers. The first studies the performance of an optomechanical resonator as a precise sensor for continuous position measurement. The sensitivity of the measurement, limited by the detection efficiency of intracavity photons, is compared to the standard quantum limit imposed by the quantum properties of the laser probe light. The added noise of the measurement is seen to fall within a factor of 3 of the standard quantum limit, representing an order of magnitude improvement over previous experiments utilizing optomechanical crystals, and matching the performance of similar measurements in the microwave domain.

The next experiment uses single-photon counting to detect individual phonon emission and absorption events within the nanomechanical oscillator. The scattering of laser light from mechanical motion produces correlated photon-phonon pairs, and detection of the emitted photon corresponds to an effective phonon counting scheme. In the process of scattering, the coherence properties of the mechanical oscillation are mapped onto the reflected light. Intensity interferometry of the reflected light then allows measurement of the temporal coherence of the acoustic field. These correlations are measured for a range of experimental conditions, including optomechanical amplification of the mechanics into a self-oscillation regime, and comparisons are drawn to a laser system for phonons. Finally, prospects for using phonon counting and intensity interferometry to produce non-classical mechanical states are detailed, following recent proposals in the literature.
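The intensity-interferometry step can be illustrated with a toy photon-counting record: estimating the normalized second-order correlation g2(τ) from binned counts. The simulated record below is white thermal-like noise, so correlations vanish after one bin, whereas a real record would decay over the phonon coherence time; this is an illustration, not the experiment's analysis.

```python
import numpy as np

# Toy estimate of g2(tau) from photon-counting records in time bins.
rng = np.random.default_rng(2)
intensity = 0.2 * rng.exponential(1.0, 200_000)   # fluctuating "thermal" intensity
counts = rng.poisson(intensity)                   # detected photons per bin

def g2(counts, tau):
    """Normalized intensity correlation at a delay of `tau` bins."""
    if tau == 0:
        # same-bin coincidences: use n(n-1) to remove the Poisson shot-noise term
        return np.mean(counts * (counts - 1)) / np.mean(counts) ** 2
    return np.mean(counts[:-tau] * counts[tau:]) / np.mean(counts) ** 2

for tau in (0, 1, 5):
    print(tau, round(g2(counts, tau), 3))   # ~2.0 at tau=0 (thermal), ~1.0 after
```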

Relevance:

10.00%

Abstract:

A position-sensing technique based on slit projection is proposed, and its sensing principle and application to precision positioning are described. A projection slit illuminated by a collimated laser beam is projected onto the measured object by a lens at grazing incidence; after reflection from the object surface, a second lens forms an image of the projected slit on a detection double slit. The detection double slit is imaged, magnified, onto a dual-quadrant detector, and the light transmitted through the two detection slits is received by the detector's two quadrants; the position of the object is obtained by measuring the optical power on the two quadrants. Experiments verify the feasibility of the technique, with a position measurement repeatability better than 32 nm (1σ).

Relevance:

10.00%

Abstract:

Bio-orthogonal non-canonical amino acid tagging (BONCAT) is an analytical method that allows the selective analysis of the subset of newly synthesized cellular proteins produced in response to a biological stimulus. In BONCAT, cells are treated with the non-canonical amino acid L-azidohomoalanine (Aha), which is utilized in protein synthesis in place of methionine by wild-type translational machinery. Nascent, Aha-labeled proteins are selectively ligated to affinity tags for enrichment and subsequently identified via mass spectrometry. The work presented in this thesis describes advancements in and applications of the BONCAT technology that establish it as an effective tool for analyzing proteome dynamics with time-resolved precision.

Chapter 1 introduces the BONCAT method and serves as an outline for the thesis as a whole. I discuss motivations behind the methodological advancements in Chapter 2 and the biological applications in Chapters 2 and 3.

Chapter 2 presents methodological developments that make BONCAT a proteomic tool capable of, in addition to identifying newly synthesized proteins, accurately quantifying rates of protein synthesis. I demonstrate that this quantitative BONCAT approach can measure proteome-wide patterns of protein synthesis at time scales inaccessible to alternative techniques.

In Chapter 3, I use BONCAT to study the biological function of the small RNA regulator CyaR in Escherichia coli. I correctly identify previously known CyaR targets, and validate several new CyaR targets, expanding the functional roles of the sRNA regulator.

In Chapter 4, I use BONCAT to measure the proteomic profile of the quorum sensing bacterium Vibrio harveyi during the time-dependent transition from individual- to group-behaviors. My analysis reveals new quorum-sensing-regulated proteins with diverse functions, including transcription factors, chemotaxis proteins, transport proteins, and proteins involved in iron homeostasis.

Overall, this work describes how to use BONCAT to perform quantitative, time-resolved proteomic analysis and demonstrates that these measurements can be used to study a broad range of biological processes.

Relevance:

10.00%

Abstract:

Stable isotope geochemistry is a valuable toolkit for addressing a broad range of problems in the geosciences. Recent technical advances provide information that was previously unattainable or provide unprecedented precision and accuracy. Two such techniques are site-specific stable isotope mass spectrometry and clumped isotope thermometry. In this thesis, I use site-specific isotope and clumped isotope data to explore natural gas development and carbonate reaction kinetics. In the first chapter, I develop an equilibrium thermodynamics model to calculate equilibrium constants for isotope exchange reactions in small organic molecules. This equilibrium data provides a framework for interpreting the more complex data in the later chapters. In the second chapter, I demonstrate a method for measuring site-specific carbon isotopes in propane using high-resolution gas source mass spectrometry. This method relies on the characteristic fragments created during electron ionization, in which I measure the relative isotopic enrichment of separate parts of the molecule. My technique will be applied to a range of organic compounds in the future. For the third chapter, I use this technique to explore diffusion, mixing, and other natural processes in natural gas basins. As time progresses and the mixture matures, different components like kerogen and oil contribute to the propane in a natural gas sample. Each component imparts a distinct fingerprint on the site-specific isotope distribution within propane that I can observe to understand the source composition and maturation of the basin. Finally, in Chapter Four, I study the reaction kinetics of clumped isotopes in aragonite. Despite its frequent use as a clumped isotope thermometer, the aragonite blocking temperature is not known. Using laboratory heating experiments, I determine that the aragonite clumped isotope thermometer has a blocking temperature of 50-100°C. I compare this result to natural samples from the San Juan Islands that exhibit a maximum clumped isotope temperature that matches this blocking temperature. This thesis presents a framework for measuring site-specific carbon isotopes in organic molecules and new constraints on aragonite reaction kinetics. This study represents the foundation of a future generation of geochemical tools for the study of complex geologic systems.

Relevance:

10.00%

Abstract:

Precision polarimetry of the cosmic microwave background (CMB) has become a mainstay of observational cosmology. The ΛCDM model predicts a polarization of the CMB at the level of a few μK, with a characteristic E-mode pattern. On small angular scales, a B-mode pattern arises from the gravitational lensing of E-mode power by the large-scale structure of the universe. Inflationary gravitational waves (IGW) may be a source of B-mode power on large angular scales, and their relative contribution to primordial fluctuations is parameterized by a tensor-to-scalar ratio r. BICEP2 and Keck Array are a pair of CMB polarimeters at the South Pole designed and built for optimal sensitivity to the primordial B-mode peak around multipole l ~ 100. The BICEP2/Keck Array program aims to achieve a sensitivity to r of 0.02. Auxiliary science goals include the study of gravitational lensing of E-mode into B-mode signal at medium angular scales and a high-precision survey of Galactic polarization. These goals require low noise and tight control of systematics. We describe the design and calibration of the instrument. We also describe the analysis of the first three years of science data. BICEP2 observes a significant B-mode signal at 150 GHz in excess of the level predicted by the lensed-ΛCDM model, and Keck Array confirms the excess signal at > 5σ. We combine the maps from the two experiments to produce 150 GHz Q and U maps which have a depth of 57 nK deg (3.4 μK arcmin) over an effective area of 400 deg² for an equivalent survey weight of 248,000 μK⁻². We also show preliminary Keck Array 95 GHz maps. A joint analysis with the Planck collaboration reveals that much of BICEP2/Keck Array's observed 150 GHz signal at low l is more likely a Galactic dust foreground than a measurement of r. Marginalizing over dust and r, lensing B-modes are detected at 7.0σ significance.

Relevance:

10.00%

Abstract:

The application of a Michelson interferometer with a self-pumped phase-conjugate mirror to measure small vibration amplitudes of a rough surface is described. The distorted wavefront of the light that is diffusely reflected from the rough surface is restored by phase conjugation to provide an interference signal with a high signal-to-noise ratio. The vibration amplitudes of a stainless-steel sample are measured with a precision of ∼5 nm. © 2000 Optical Society of America.

Relevance:

10.00%

Abstract:

The application of optical differentiation to image-based depth estimation is discussed. Based on linear imaging theory, the optical differentiation model proposed by Farid is generalized so that the two images used for depth estimation may satisfy a linear differential relation of arbitrary order during imaging. This model broadens the concept of optical differentiation, admitting more optical-differentiation forms for the relation between the two exposures. With a view to choosing a differential relation that optimizes overall system performance, the influence of the imaging system's parameters on depth-estimation accuracy and longitudinal resolution is analyzed, and the construction and optimization of the key optical element in the method, the optical mask, are also explored preliminarily.