942 results for Subpixel precision


Relevância:

10.00%

Publicador:

Resumo:

The free neutron beta decay correlation A0 between neutron polarization and electron emission direction provides the strongest constraint on the ratio λ = gA/gV of the axial-vector to vector coupling constants in weak decay. In conjunction with the CKM matrix element Vud and the neutron lifetime τn, λ provides a test of Standard Model assumptions for the weak interaction. Leading high-precision measurements of A0 and τn in the 1995-2005 period showed discrepancies with prior measurements and with Standard Model predictions for the relationship between λ, τn, and Vud. The UCNA experiment was developed to measure A0 from the decay of polarized ultracold neutrons (UCN), providing a complementary determination of λ with systematic uncertainties different from those of prior cold neutron beam experiments. This dissertation describes the analysis of the dataset collected by UCNA in 2010, with emphasis on detector response calibrations and systematics. The UCNA measurement is placed in the context of the most recent τn results and cold neutron A0 experiments.
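The constraint on λ from A0 can be sketched numerically. At tree level in the Standard Model (neglecting small recoil-order and radiative corrections), A0 = −2λ(λ + 1)/(1 + 3λ²); the target A0 value below is illustrative, close to the world average, not a result from this dissertation:

```python
def beta_asymmetry(lam):
    """Tree-level SM beta asymmetry A0 as a function of lambda = gA/gV
    (recoil-order and radiative corrections neglected)."""
    return -2.0 * lam * (lam + 1.0) / (1.0 + 3.0 * lam**2)

def solve_lambda(a0, lo=-1.5, hi=-1.0, tol=1e-10):
    """Invert A0(lambda) by bisection on the physical branch lambda < 0."""
    f_lo = beta_asymmetry(lo) - a0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = beta_asymmetry(mid) - a0
        if f_mid * f_lo > 0:
            lo, f_lo = mid, f_mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_lambda(-0.1176)  # illustrative A0 input, near the world average
```

With this input the inversion lands near λ ≈ −1.27, the familiar scale of gA/gV.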


Resumo:

A description is given of experimental work on the damping of a second order electron plasma wave echo due to velocity space diffusion in a low temperature magnetoplasma. Sufficient precision was obtained to verify the theoretically predicted cubic rather than quadratic or quartic dependence of the damping on exciter separation. Compared to the damping predicted for Coulomb collisions in a thermal plasma in an infinite magnetic field, the magnitude of the damping was approximately as predicted, while the velocity dependence of the damping was weaker than predicted. The discrepancy is consistent with the actual non-Maxwellian electron distribution of the plasma.

In conjunction with the damping work, echo amplitude saturation was measured as a function of the velocity of the electrons contributing to the echo. Good agreement was obtained with the predicted J1 Bessel function amplitude dependence, as well as a demonstration that saturation did not influence the damping results.
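The cubic exciter-separation dependence verified above can be illustrated with synthetic data: for echo amplitudes damped as exp(−Cs³), the least-squares slope of log(−log A) versus log s recovers the exponent 3. The damping constant and separations below are arbitrary illustrative values, not the experiment's numbers:

```python
import math

C = 0.05                              # illustrative damping constant
seps = [1.0, 1.5, 2.0, 2.5, 3.0]      # exciter separations (arbitrary units)
amps = [math.exp(-C * s**3) for s in seps]   # echo amplitude A = exp(-C s^3)

# Slope of log(-log A) vs log s equals the damping exponent (here, 3)
xs = [math.log(s) for s in seps]
ys = [math.log(-math.log(a)) for a in amps]
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

The same log-log fit applied to measured amplitudes is one way to distinguish the cubic law from quadratic or quartic alternatives.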


Resumo:

[ES] The objectives of this work are to analyze and optimize the hard turning process of ASP-23 steel, looking in particular at different solutions for broaches. The project arises from the importance of reducing both the economic costs and the manufacturing time of ASP-23 steel components produced by hard turning, a machining process of growing importance in industries such as automotive and aeronautics. The project grew out of the need of EKIN S. Coop, one of the leaders in high-precision machine-tool processes for broaching, to develop a more efficient machining process for the broaches it produces. Accordingly, in the machine-tool laboratory (ETSIB) we have sought to demonstrate the benefits of hard turning in the machining of ASP-23. Today, with the rapid development of new materials, manufacturing processes are becoming increasingly complex, owing to the wide variety of machines on which the processes are carried out, the variety of tool geometries and materials, the properties of the workpiece material, the wide range of cutting parameters with which the process can be implemented (depth of cut, speed, feed...), and the diversity of clamping elements used. We must also be aware that such variety entails large deformations, speeds, and temperatures. Herein lies the justification for, and the great interest in, this project. We therefore attempt to take a small step forward in the understanding of hard turning of steels with poor machinability, conscious of the breadth and difficulty of progress in manufacturing engineering and of the great deal of work that remains to be done.


Resumo:

In this thesis I present a study of W pair production in e+e- annihilation using fully hadronic W+W- events. The data collected by the L3 detector at LEP in 1996-1998, at collision center-of-mass energies between 161 and 189 GeV, were used in my analysis.

Analysis of the total and differential W+W- cross sections with the resulting sample of 1,932 W+W- → qqqq event candidates allowed me to make precision measurements of a number of properties of the W boson. I combined my measurements with those using other W+W- final states to obtain stringent constraints on the W boson's couplings to fermions, other gauge bosons, and the scalar Higgs field, by measuring the total e+e- → W+W- cross section and its energy dependence:

σ(e+e- → W+W-) =

2.68 +0.98/−0.67 (stat.) ± 0.14 (syst.) pb, √s = 161.34 GeV

12.04 +1.38/−1.29 (stat.) ± 0.23 (syst.) pb, √s = 172.13 GeV

16.45 ± 0.67 (stat.) ± 0.26 (syst.) pb, √s = 182.68 GeV

16.28 ± 0.38 (stat.) ± 0.26 (syst.) pb, √s = 188.64 GeV

the fraction of W bosons decaying into hadrons

BR(W →qq') = 68.72 ± 0.69(stat.) ± 0.38(syst.) %,

the invisible non-SM width of the W boson

ΓinvisibleW less than MeV at 95% C.L.,

the mass of the W boson

MW = 80.44 ± 0.08(stat.)± 0.06(syst.) GeV,

the total width of the W boson

ΓW = 2.18 ± 0.20(stat.)± 0.11(syst.) GeV,

the anomalous triple gauge boson couplings of the W

ΔgZ1 = 0.16+0.13-0.20(stat.) ± 0.11(syst.)

Δkγ = 0.26+0.24-0.33(stat.) ± 0.16(syst.)

λγ = 0.18+0.13-0.20(stat.) ± 0.11(syst.)

No significant deviations from Standard Model predictions were found in any of the measurements.
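As an illustration of how the branching fraction above propagates, the fraction of W-pair events that are fully hadronic is simply the square of BR(W → qq'); a small sketch using the quoted central value, with the statistical and systematic errors combined in quadrature and propagated to first order:

```python
br = 0.6872                              # BR(W -> qq') from the measurement above
err = (0.0069**2 + 0.0038**2) ** 0.5     # stat. and syst. added in quadrature

frac_qqqq = br**2        # expected fraction of fully hadronic W+W- -> qqqq events
frac_err = 2 * br * err  # first-order error propagation for br**2
```

This gives roughly 47% of W pairs in the fully hadronic channel, consistent with the size of the qqqq sample used in the analysis.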


Resumo:

[ES] The bachelor's thesis developed in this document consists of building a graphical interface for analyzing the accuracy, in the measurement of harmonics and interharmonics of voltage and current electrical signals, of different techniques that synchronize the sampling frequency with the fundamental frequency. Different techniques for estimating the fundamental frequency and different resampling techniques are studied, applied to analytic signals whose fundamental frequency and harmonic content are known. The goal of these processing techniques is to improve the measurement of harmonic content by reducing, through synchronization of the sampling frequency, the error caused by the spectral leakage produced by windowing the signals.
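The effect targeted here can be reproduced in a few lines: when the sampling window holds an integer number of fundamental cycles, the DFT concentrates the signal energy in a single bin; with a non-synchronized sampling rate, windowing leaks energy into neighbouring bins. The frequencies below are illustrative choices, not the thesis's test signals:

```python
import math

def peak_energy_fraction(f0, fs, n):
    """Fraction of (half-spectrum) DFT energy in the strongest bin
    for a pure sine of frequency f0 sampled at fs over n points."""
    x = [math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]
    energies = []
    for b in range(n // 2):
        re = sum(x[k] * math.cos(2 * math.pi * b * k / n) for k in range(n))
        im = sum(x[k] * math.sin(2 * math.pi * b * k / n) for k in range(n))
        energies.append(re * re + im * im)
    return max(energies) / sum(energies)

# 50 Hz fundamental, 200-sample window:
# fs = 1000 Hz -> exactly 10 cycles (synchronized); fs = 1024 Hz -> 9.77 cycles
sync_frac = peak_energy_fraction(50.0, 1000.0, 200)
async_frac = peak_energy_fraction(50.0, 1024.0, 200)
```

With synchronized sampling essentially all of the energy sits in one bin, while the non-synchronized case spreads a substantial fraction across the spectrum, which is exactly the measurement error the resampling techniques aim to suppress.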


Resumo:

For more than 55 years, data have been collected on the population of pike Esox lucius in Windermere, first by the Freshwater Biological Association (FBA) and, since 1989, by the Institute of Freshwater Ecology (IFE) of the NERC Centre for Ecology and Hydrology. The aim of this article is to explore some methodological and statistical issues associated with the precision of pike gill net catches and catch-per-unit-effort (CPUE) data, further to those examined by Bagenal (1972) and especially in the light of the current deployment within the Windermere long-term sampling programme. Specifically, consideration is given to the precision of catch estimates from gill netting, including the effects of sampling different locations, the effectiveness of sampling for distinguishing between years, and the effects of changing fishing effort.
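The precision question can be made concrete with a toy calculation: for catches from a set of comparable net-nights, the relative standard error of the mean catch-per-unit-effort falls as 1/√n with the number of net sets. The catch numbers below are hypothetical, not Windermere data:

```python
import math

catches = [3, 5, 2, 7, 4, 6]       # pike per net-night (hypothetical)
n = len(catches)

cpue = sum(catches) / n            # mean catch per unit effort
s2 = sum((c - cpue) ** 2 for c in catches) / (n - 1)   # sample variance
se = math.sqrt(s2 / n)             # standard error of the mean CPUE
cv = se / cpue                     # relative precision (coefficient of variation)
```

A CV of this kind is one simple way to judge whether a netting programme can distinguish between years, which is one of the questions the article examines.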


Resumo:

[ES] The project presented below describes the development of a core to be embedded in FPGAs (Field Programmable Gate Arrays), whose purpose is to create a time reference, in a 64-bit architecture, derived from a GPS (Global Positioning System) module and accurate to on the order of tens of nanoseconds, so that it can be inserted in a PTP-Master (Precision Time Protocol - Master, IEEE (Institute of Electrical and Electronics Engineers) 1588) device, at low cost and with quality comparable to that of Grand Master devices.
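A 64-bit nanosecond counter of the kind described covers several centuries before wrapping and maps directly onto PTP-style seconds/nanoseconds timestamp fields; a minimal sketch (the counter value is hypothetical, and the split into seconds and nanoseconds fields follows the general IEEE 1588 timestamp convention rather than this project's actual core):

```python
NS_PER_S = 10**9

def to_ptp_fields(ns_counter):
    """Split a 64-bit nanosecond count into PTP-style
    (secondsField, nanosecondsField) values."""
    return ns_counter // NS_PER_S, ns_counter % NS_PER_S

# Hypothetical counter value: 1234 s plus 56789 ns since the epoch
secs, nanos = to_ptp_fields(1234 * NS_PER_S + 56789)

# An unsigned 64-bit nanosecond counter wraps only after ~584 years
span_years = (2**64 / NS_PER_S) / (3600 * 24 * 365.25)
```

The long wrap interval is what makes a flat 64-bit nanosecond representation convenient for an embedded time reference.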


Resumo:

Observations of solar energetic particles (SEPs) from 22 solar flares in the 1977-1982 time period are reported. The observations were made by the Cosmic Ray Subsystem on board the Voyager 1 and 2 spacecraft. SEP abundances have been obtained for all elements with 3 ≤ Z ≤ 30 except Li, Be, B, F, Sc, V, Co and Cu, for which upper limits have been obtained. Statistically meaningful abundances of several rare elements (e.g., P, Cl, K, Ti, Mn) have been determined for the first time, and the average abundances of the more abundant elements have been determined with improved precision, typically a factor of three better than the best previous determinations.

Previously reported results concerning the dependence of the fractionation of SEPs relative to the photosphere on first ionization potential (FIP) have been confirmed and extended with the new data. The monotonic Z-dependence of the flare-to-flare variation noted by earlier studies was found to be interpretable as a fractionation, produced by acceleration of the particles from the corona and their propagation through interplanetary space, which is ordered by the ionic charge-to-mass ratio Q/M of the species making up the SEPs. Q/M was found to be the primary organizing parameter of acceleration and propagation effects in SEPs, as evidenced by the Q/M dependence of the temporal, spatial, and energy dependence within flares and of the abundance variability from flare to flare.
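A Q/M-ordered fractionation of this kind can be illustrated as a simple power law: if observed SEP abundances scale as (Q/M)^γ relative to the unfractionated corona, dividing out that factor recovers the coronal values. The species, charge states, abundances, and the index γ below are hypothetical illustrations, not the dissertation's fitted values:

```python
# Hypothetical species: (name, ionic charge Q, mass M, "true" coronal abundance)
species = [("O", 6, 16.0, 15.0), ("Ne", 8, 20.2, 2.5), ("Fe", 14, 55.8, 1.0)]
gamma = 0.6   # hypothetical fractionation index

# Forward model: flare-observed abundance = coronal value * (Q/M)**gamma
observed = {name: a * (q / m) ** gamma for name, q, m, a in species}

# Correction: divide out the power law to recover the coronal abundance
corrected = {name: observed[name] / ((q / m) ** gamma)
             for name, q, m, _ in species}
```

Fitting a single index per flare and averaging over many flares is the spirit of the correction that yields the unfractionated coronal composition described below.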

An unfractionated coronal composition was derived by applying a simple Q/M fractionation correction to the observed average SEP composition, to simultaneously correct for all Q/M-correlated acceleration/propagation fractionation of SEPs. The resulting coronal composition agrees well with current XUV/X-ray spectroscopic measurements of coronal composition but is of much higher precision and is available for a much larger set of elements. Compared to spectroscopic photospheric abundances, the SEP-derived corona appears depleted in C and somewhat enriched in Cr (and possibly Ca and Ti).

An unfractionated photospheric composition was derived by applying a simple FIP fractionation correction to the derived coronal composition, to correct for the FIP-associated fractionation of the corona during its formation from photospheric material. The resulting composition agrees well with the photospheric abundance tabulation of Grevesse (1984) except for an at least 50% lower abundance of C and a significantly greater abundance of Cr and possibly Ti. The results support the Grevesse photospheric Fe abundance, about 50% higher than meteoritic and earlier solar values. The SEP-derived photospheric composition is not generally of higher precision than the available spectroscopic data, but it relies on fewer physical parameters and is available for some elements (C, N, Ne, Ar) which cannot be measured spectroscopically in the photosphere.


Resumo:

Researchers have spent decades refining and improving their methods for fabricating smaller, finer-tuned, higher-quality nanoscale optical elements, with the goal of making more sensitive and accurate optical measurements of the world around them. Quantum optics has been a well-established tool of choice for these increasingly sensitive measurements, which have repeatedly pushed the limits on measurement accuracy set forth by quantum mechanics. A recent development in quantum optics has been the creative integration of robust, high-quality, well-established macroscopic experimental systems with highly engineerable on-chip nanoscale oscillators fabricated in cleanrooms. However, merging large systems with nanoscale oscillators often requires the oscillators to have extremely high aspect ratios, which makes them delicate and difficult to fabricate with experimentally reasonable repeatability, yield, and quality. In this work we give an overview of our research, which focused on microscopic oscillators coupled to macroscopic optical cavities, toward the goal of cooling them to their motional ground state in room-temperature environments. The quality factor of a mechanical resonator is an important figure of merit for various sensing applications and for observing quantum behavior. We demonstrated a technique for pushing the quality factor of a micromechanical resonator beyond conventional material and fabrication limits by using an optical field to stiffen and trap a particular motional mode of a nanoscale oscillator. Optical forces increase the oscillation frequency by storing most of the mechanical energy in a nearly lossless optical potential, thereby strongly diluting the effects of material dissipation. By placing a 130 nm thick SiO2 pendulum in an optical standing wave, we achieve an increase in the pendulum center-of-mass frequency from 6.2 to 145 kHz.
The corresponding quality factor increases 50-fold from its intrinsic value to a final value of Qm = 5.8(1.1) x 105, representing more than an order of magnitude improvement over the conventional limits of SiO2 for a pendulum geometry. Our technique may enable new opportunities for mechanical sensing and facilitate observations of quantum behavior in this class of mechanical systems. We then give a detailed overview of the techniques used to produce high-aspect-ratio nanostructures with applications in a wide range of quantum optics experiments. The ability to fabricate such nanodevices with high precision opens the door to a vast array of experiments which integrate macroscopic optical setups with lithographically engineered nanodevices. Coupled with atom-trapping experiments in the Kimble Lab, we use these techniques to realize a new waveguide chip designed to address ultra-cold atoms along lithographically patterned nanobeams which have large atom-photon coupling and near 4π Steradian optical access for cooling and trapping atoms. We describe a fully integrated and scalable design where cold atoms are spatially overlapped with the nanostring cavities in order to observe a resonant optical depth of d0 ≈ 0.15. The nanodevice illuminates new possibilities for integrating atoms into photonic circuits and engineering quantum states of atoms and light on a microscopic scale. We then describe our work with superconducting microwave resonators coupled to a phononic cavity towards the goal of building an integrated device for quantum-limited microwave-to-optical wavelength conversion. We give an overview of our characterizations of several types of substrates for fabricating a low-loss high-frequency electromechanical system. We describe our electromechanical system fabricated on a Si3N4 membrane which consists of a 12 GHz superconducting LC resonator coupled capacitively to the high frequency localized modes of a phononic nanobeam. 
Using our suspended membrane geometry, we isolate our system from substrates with significant loss tangents, drastically reducing the parasitic capacitance of our superconducting circuit to ≈ 2.5 fF. This opens up a number of possibilities for making a new class of low-loss, high-frequency electromechanical systems with relatively large electromechanical coupling. We present our substrate studies, fabrication methods, and device characterization.


Resumo:

In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process and the dominant irreducible Standard Model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings, and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the HZZ tree-level coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; namely, compatible with a Standard Model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb−1, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
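The spirit of an unbinned likelihood that is continuous in a coupling parameter can be conveyed with a one-dimensional toy (not the thesis's 8-dimensional framework): a mixture pdf p(x|f) = (1 − f)·1 + f·2x on [0, 1], with the mixing fraction f playing the role of a coupling, fitted by scanning the log-likelihood over f. All numbers are synthetic:

```python
import math
import random

random.seed(7)
f_true = 0.3
# Toy "events": with probability f from pdf 2x (x = sqrt(u)), else uniform
data = [math.sqrt(random.random()) if random.random() < f_true
        else random.random() for _ in range(4000)]

def nll(f):
    """Unbinned negative log-likelihood for the mixture (1-f)*1 + f*2x."""
    return -sum(math.log((1.0 - f) + f * 2.0 * x) for x in data)

# Scan the continuous parameter and take the minimum
scan = [(nll(i / 100.0), i / 100.0) for i in range(101)]
f_hat = min(scan)[1]
```

Replacing the toy pdf with the analytic 4l differential cross sections, convolved with detector transfer functions, turns this scan into the coupling extraction the thesis performs.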


Resumo:

It is often difficult to define ‘water quality’ with any degree of precision. One approach is that suggested by Battarbee (1997), based on the extent to which individual lakes have changed compared with their natural ‘baseline’ status. Defining the baseline status of artificial lakes and reservoirs, however, is very difficult. In ecological terms, the definition of quality must include some consideration of their functional characteristics and the extent to which these characteristics are self-sustaining. The challenge of managing lakes in a sustainable way is particularly acute in semi-arid, Mediterranean countries. Here the quality of the water is strongly influenced by the unpredictability of the rainfall as well as year-to-year variations in the seasonal averages. Wise management requires profound knowledge of how these systems function. Thus a holistic approach must be adopted and the factors influencing the seasonal dynamics of the lakes quantified over a range of spatial and temporal scales. In this article, the authors describe some of the ways in which both long-term and short-term changes in the weather have influenced the seasonal and spatial dynamics of phytoplankton in El Gergal, a water supply reservoir situated in the south of Spain. The quality of the water stored in this reservoir is typically very good, but surface blooms of algae commonly appear during warm, calm periods when the water level is low. El Gergal reservoir is managed by the Empresa Municipal de Abastecimiento y Saneamiento (EMASESA) and supplies water for domestic, commercial and industrial use to an area which includes the city of Seville and twelve of its surrounding towns (ca. 1.3 million inhabitants). El Gergal is the last in a chain of four reservoirs situated in the Rivera de Huelva basin, a tributary of the Guadalquivir river.
It was commissioned by EMASESA in 1979 and since then the company has monitored its main limnological parameters on at least a monthly basis and used this information to improve the management of the reservoir. As a consequence of these intensive studies, the physical, chemical and biological information acquired during this period makes the El Gergal database one of the most complete in Spain. In this article the authors focus on three ‘weather-related’ effects that have had a significant impact on the composition and distribution of phytoplankton in El Gergal: (i) the changes associated with severe droughts; (ii) the spatial variations produced by short-term changes in the weather; (iii) the impact of water transfers on the seasonal dynamics of the dinoflagellate Ceratium.


Resumo:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
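The flavor of such a scheme can be sketched with a toy aggregate version (a minimal sketch, not the thesis's Algorithm 1): projected gradient descent that places a fixed amount of deferrable energy E across time slots so as to flatten the net demand on top of a base profile. All numbers are illustrative:

```python
def project_to_budget(p, energy):
    """Euclidean projection onto {p >= 0, sum(p) = energy}
    (the scaled-simplex projection of Duchi et al.)."""
    u = sorted(p, reverse=True)
    cum, theta = 0.0, 0.0
    for i, ui in enumerate(u):
        cum += ui
        t = (cum - energy) / (i + 1)
        if ui - t > 0:
            theta = t
    return [max(x - theta, 0.0) for x in p]

base = [1.0, 1.0, 4.0, 4.0, 1.0, 1.0]   # illustrative base demand profile
energy = 6.0                             # deferrable energy to schedule
p = [energy / len(base)] * len(base)     # start from a uniform schedule

for _ in range(300):                     # minimize sum_t (base_t + p_t)^2
    grad = [2.0 * (b + q) for b, q in zip(base, p)]
    p = project_to_budget([q - 0.1 * g for q, g in zip(p, grad)], energy)

total = [b + q for b, q in zip(base, p)]
```

The iterates "valley-fill": deferrable energy migrates away from the demand peak until the net profile is as flat as the energy budget allows.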

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem, since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
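A lossless linear power-flow approximation of the kind used for these gradients can be illustrated with a LinDistFlow-style calculation on a toy single-phase radial feeder, where the squared voltage magnitude drops by 2(rP + xQ) across each line. The line data and loads below are illustrative per-unit values, not from the thesis:

```python
# Toy radial feeder: bus 0 (substation) - 1 - 2 - 3
r = [0.01, 0.01, 0.02]          # line resistances (p.u., illustrative)
x = [0.02, 0.02, 0.03]          # line reactances (p.u., illustrative)
p_load = [0.0, 0.1, 0.1, 0.2]   # active power draw at each bus (p.u.)
q_load = [0.0, 0.05, 0.05, 0.1] # reactive power draw at each bus (p.u.)

v = [1.0]                       # squared voltage magnitudes; v0 = 1 p.u.
for i in range(3):
    # With losses neglected, flow on line i = sum of downstream loads
    P = sum(p_load[i + 1:])
    Q = sum(q_load[i + 1:])
    # LinDistFlow update: v_{i+1} = v_i - 2 (r_i P + x_i Q)
    v.append(v[-1] - 2.0 * (r[i] * P + x[i] * Q))

volts = [vi ** 0.5 for vi in v]  # voltage magnitudes along the feeder
```

Because the model is linear in the injections, the sensitivity of every bus voltage to every load is a constant, which is what makes cheap gradient estimates possible.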


Resumo:

These studies explore how, where, and when representations of variables critical to decision-making are represented in the brain. In order to produce a decision, humans must first determine the relevant stimuli, actions, and possible outcomes before applying an algorithm that will select an action from those available. When choosing amongst alternative stimuli, the framework of value-based decision-making proposes that values are assigned to the stimuli and that these values are then compared in an abstract “value space” in order to produce a decision. Despite much progress, in particular regarding the pinpointing of ventromedial prefrontal cortex (vmPFC) as a region that encodes the value, many basic questions remain. In Chapter 2, I show that distributed BOLD signaling in vmPFC represents the value of stimuli under consideration in a manner that is independent of the type of stimulus it is. Thus the open question of whether value is represented in abstraction, a key tenet of value-based decision-making, is confirmed. However, I also show that stimulus-dependent value representations are also present in the brain during decision-making and suggest a potential neural pathway for stimulus-to-value transformations that integrates these two results.

More broadly speaking, there is both neural and behavioral evidence that two distinct control systems are at work during action selection. These two systems compose the “goal-directed system”, which selects actions based on an internal model of the environment, and the “habitual” system, which generates responses based on antecedent stimuli only. Computational characterizations of these two systems imply that they have different informational requirements in terms of input stimuli, actions, and possible outcomes. Associative learning theory predicts that the habitual system should utilize stimulus and action information only, while goal-directed behavior requires that outcomes as well as stimuli and actions be processed. In Chapter 3, I test whether areas of the brain hypothesized to be involved in habitual versus goal-directed control represent the corresponding theorized variables.

The question of whether one or both of these neural systems drives Pavlovian conditioning is less well-studied. Chapter 4 describes an experiment in which subjects were scanned while engaged in a Pavlovian task with a simple non-trivial structure. After comparing a variety of model-based and model-free learning algorithms (thought to underpin goal-directed and habitual decision-making, respectively), it was found that subjects’ reaction times were better explained by a model-based system. In addition, neural signaling of precision, a variable based on a representation of a world model, was found in the amygdala. These data indicate that the influence of model-based representations of the environment can extend even to the most basic learning processes.

Knowledge of the state of hidden variables in an environment is required for optimal inference regarding the abstract decision structure of a given environment and therefore can be crucial to decision-making in a wide range of situations. Inferring the state of an abstract variable requires the generation and manipulation of an internal representation of beliefs over the values of the hidden variable. In Chapter 5, I describe behavioral and neural results regarding the learning strategies employed by human subjects in a hierarchical state-estimation task. In particular, a comprehensive model fit and comparison process pointed to the use of "belief thresholding". This implies that subjects tended to eliminate low-probability hypotheses regarding the state of the environment from their internal model and ceased to update the corresponding variables. Thus, in concert with incremental Bayesian learning, humans explicitly manipulate their internal model of the generative process during hierarchical inference consistent with a serial hypothesis testing strategy.
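Belief thresholding of the kind inferred here can be sketched as a small modification of incremental Bayesian updating: hypotheses whose posterior falls below a cutoff are dropped from the internal model and never updated again. The hypotheses, likelihoods, and threshold below are illustrative, not the task's actual generative model:

```python
def update_beliefs(beliefs, likelihood, threshold=0.01):
    """One Bayesian update followed by pruning of weak hypotheses."""
    post = {h: b * likelihood[h] for h, b in beliefs.items()}
    z = sum(post.values())
    post = {h: p / z for h, p in post.items()}        # Bayes' rule
    post = {h: p for h, p in post.items() if p >= threshold}  # thresholding
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}        # renormalize survivors

beliefs = {"h1": 1 / 3, "h2": 1 / 3, "h3": 1 / 3}     # uniform prior
evidence = {"h1": 0.8, "h2": 0.19, "h3": 0.01}        # P(obs | h), illustrative
for _ in range(3):
    beliefs = update_beliefs(beliefs, evidence)
```

After a few observations the low-probability hypothesis is eliminated outright, mirroring the serial hypothesis-testing strategy the model comparison pointed to.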


Resumo:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.


Resumo:

In intersatellite semiconductor-laser communication systems, assessing the wavefront quality of the transmitted beam is a difficult problem. To address it, after a brief introduction to the white-light lateral double-shearing interferometer, we report the use of this interferometer to measure the wavefront of a near-diffraction-limited semiconductor laser beam, and on this basis derive a formula for calculating the far-field divergence. The measured near-field wavefront height difference was 0.2λ, and the divergence obtained via Fraunhofer diffraction was only 64.8 μrad, indicating that the beam is close to the optical diffraction limit. The results also show that the double-shearing interferometer offers high sensitivity and good practicality.
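For scale, the diffraction-limited far-field half-angle divergence of a collimated Gaussian beam is θ ≈ λ/(πw0); with an assumed near-infrared diode wavelength and a hypothetical collimated beam radius, one obtains a divergence of the same order as the 64.8 μrad reported. Both input numbers below are assumptions for illustration, not values from the paper:

```python
import math

wavelength = 0.83e-6   # assumed near-IR diode laser wavelength (m)
w0 = 4.1e-3            # hypothetical collimated beam waist radius (m)

# Gaussian-beam far-field half-angle divergence: theta = lambda / (pi * w0)
theta = wavelength / (math.pi * w0)
theta_urad = theta * 1e6
```

Comparing a measured divergence against this diffraction-limited figure is the usual way to quantify how close a beam is to the optical diffraction limit.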