988 results for Sub-half-wavelength
Abstract:
By sensitizing with 514 nm green light, 488 nm blue light, and 390 nm ultraviolet light, respectively, while recording with 633 nm red light, the effect of the sensitizing wavelength on the holographic storage properties of a LiNbO3:Fe:Ni crystal is investigated in detail. It is shown that as the sensitizing wavelength is shortened, the nonvolatile holographic recording properties of the oxidized LiNbO3:Fe:Ni crystal improve progressively, with 390 nm ultraviolet light performing best. Considering the absorption of the sensitizing light, the best performance in two-center holographic recording requires a sensitizing wavelength long enough to avoid unwanted absorptions (band-to-band, etc.) yet short enough to sensitize the deep traps efficiently, so in practice a trade-off is always needed. A theoretical explanation is presented. (c) 2005 Elsevier GmbH. All rights reserved.
Abstract:
We describe the design, fabrication, and excellent performance of an optimized deep-etched high-density fused-silica transmission grating for use in dense wavelength division multiplexing (DWDM) systems. The fabricated optimized transmission grating exhibits an efficiency of 87.1% at a wavelength of 1550 nm. Inductively coupled plasma-etching technology was used to fabricate the grating. The deep-etched high-density fused-silica transmission grating is suitable for use in a DWDM system because of its high efficiency, low polarization-dependent loss, parallel demultiplexing, and stable optical performance. The fabricated deep-etched high-density fused-silica transmission gratings should play an important role in DWDM systems. (c) 2006 Optical Society of America.
Abstract:
The first part of this thesis combines Bolocam observations of the thermal Sunyaev-Zel'dovich (SZ) effect at 140 GHz with X-ray observations from Chandra, strong lensing data from the Hubble Space Telescope (HST), and weak lensing data from HST and Subaru to constrain parametric models for the distribution of dark and baryonic matter in a sample of six massive, dynamically relaxed galaxy clusters. For five of the six clusters, the full multiwavelength dataset is well described by a relatively simple model that assumes spherical symmetry, hydrostatic equilibrium, and entirely thermal pressure support. The multiwavelength analysis yields considerably better constraints on the total mass and concentration compared to analysis of any one dataset individually. The subsample of five galaxy clusters is used to place an upper limit on the fraction of pressure support in the intracluster medium (ICM) due to nonthermal processes, such as turbulent and bulk flow of the gas. We constrain the nonthermal pressure fraction at r500c to be less than 0.11 at 95% confidence, where r500c refers to the radius at which the average enclosed density is 500 times the critical density of the Universe. This is in tension with state-of-the-art hydrodynamical simulations, which predict a nonthermal pressure fraction of approximately 0.25 at r500c for the clusters in this sample.
The second part of this thesis focuses on the characterization of the Multiwavelength Sub/millimeter Inductance Camera (MUSIC), a photometric imaging camera that was commissioned at the Caltech Submillimeter Observatory (CSO) in 2012. MUSIC is designed to have a 14 arcminute, diffraction-limited field of view populated with 576 spatial pixels that are simultaneously sensitive to four bands at 150, 220, 290, and 350 GHz. It is well-suited for studies of dusty star forming galaxies, galaxy clusters via the SZ Effect, and galactic star formation. MUSIC employs a number of novel detector technologies: broadband phased-arrays of slot dipole antennas for beam formation, on-chip lumped element filters for band definition, and Microwave Kinetic Inductance Detectors (MKIDs) for transduction of incoming light to electric signal. MKIDs are superconducting micro-resonators coupled to a feedline. Incoming light breaks apart Cooper pairs in the superconductor, causing a change in the quality factor and frequency of the resonator. This is read out as amplitude and phase modulation of a microwave probe signal centered on the resonant frequency. By tuning each resonator to a slightly different frequency and sending out a superposition of probe signals, hundreds of detectors can be read out on a single feedline. This natural capability for large scale, frequency domain multiplexing combined with relatively simple fabrication makes MKIDs a promising low temperature detector for future kilopixel sub/millimeter instruments. There is also considerable interest in using MKIDs for optical through near-infrared spectrophotometry due to their fast microsecond response time and modest energy resolution. In order to optimize the MKID design to obtain suitable performance for any particular application, it is critical to have a well-understood physical model for the detectors and the sources of noise to which they are susceptible. 
MUSIC has collected many hours of on-sky data with over 1000 MKIDs. This work studies the performance of the detectors in the context of one such physical model. Chapter 2 describes the theoretical model for the responsivity and noise of MKIDs. Chapter 3 outlines the set of measurements used to calibrate this model for the MUSIC detectors. Chapter 4 presents the resulting estimates of the spectral response, optical efficiency, and on-sky loading. The measured detector response to Uranus is compared to the calibrated model prediction in order to determine how well the model describes the propagation of signal through the full instrument. Chapter 5 examines the noise present in the detector timestreams during recent science observations. Noise due to fluctuations in atmospheric emission dominates at long timescales (frequencies below 0.5 Hz). Fluctuations in the amplitude and phase of the microwave probe signal due to the readout electronics contribute significant 1/f and drift-type noise at shorter timescales. The atmospheric noise is removed by creating a template for the fluctuations in atmospheric emission from weighted averages of the detector timestreams. The electronics noise is removed by using probe signals centered off-resonance to construct templates for the amplitude and phase fluctuations. The algorithms that perform the atmospheric and electronic noise removal are described. After removal, we find good agreement between the observed residual noise and our expectation for intrinsic detector noise over a significant fraction of the signal bandwidth.
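The template-based atmospheric cleaning described above can be sketched in a few lines. The function name, the inverse-variance weighting, and the least-squares coupling below are illustrative assumptions for this sketch, not the actual MUSIC pipeline:

```python
import numpy as np

def remove_common_mode(timestreams, weights=None):
    """Remove a common-mode (e.g. atmospheric) template from detector
    timestreams by subtracting a weighted-average template, scaled to
    each detector by a least-squares coupling coefficient.

    timestreams : (n_det, n_samp) array of detector data.
    weights     : optional per-detector weights; defaults to inverse
                  variance (assumes every detector sees some signal).
    """
    d = np.asarray(timestreams, dtype=float)
    if weights is None:
        weights = 1.0 / np.var(d, axis=1)
    # weighted average across detectors -> common-mode template
    template = np.average(d, axis=0, weights=weights)
    # least-squares fit of each detector against the template
    coupling = d @ template / (template @ template)
    return d - np.outer(coupling, template)
```

When the detector signals are dominated by a shared atmospheric component with detector-dependent gains, the residual after subtraction isolates the uncorrelated (intrinsic) noise.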
Abstract:
The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.
Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects, or much shallower in terms of depth. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.
We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find a significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane dlogRe/dlogM∗ with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies recently arrived to the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.
Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.
A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.
Abstract:
We describe high-efficiency diffraction gratings in fused silica, designed for a wavelength of 632.8 nm using rigorous coupled-wave analysis (RCWA). High-density holographic gratings with a groove density in the range 1575-1630 lines/mm and a groove depth in the range 1.1-1.3 microns can realize high diffraction efficiencies at 632.8 nm: the first Bragg diffraction efficiency can theoretically exceed 93% for both TE- and TM-polarized incidence, which greatly reduces the polarization-dependent loss. With the groove profile further optimized, a maximum efficiency of more than 99.69% can be achieved for TM-polarized incidence, or 97.81% for TE-polarized incidence.
Abstract:
Kilometer scale interferometers for the detection of gravitational waves are currently under construction by the LIGO (Laser Interferometer Gravitational-wave Observatory) and VIRGO projects. These interferometers will consist of two Fabry-Perot cavities illuminated by a laser beam which is split in half by a beam splitter. A recycling mirror between the laser and the beam splitter will reflect the light returning from the beam splitter towards the laser back into the interferometer. The positions of the optical components in these interferometers must be controlled to a small fraction of a wavelength of the laser light. Schemes to extract signals necessary to control these optical components have been developed and demonstrated on the tabletop. In the large scale gravitational wave detectors the optical components must be suspended from vibration isolation platforms to achieve the necessary isolation from seismic motion. These suspended components present a new class of problems in controlling the interferometer, but also provide a more exacting test of interferometer signal and noise models.
This thesis discusses the first operation of a suspended-mass Fabry-Perot-Michelson interferometer, in which signals carried by the optically recombined beams are used to detect and control all important mirror displacements. This interferometer uses an optical configuration and signal extraction scheme that is planned for the full scale LIGO interferometers with the simplification of the removal of the recycling mirror. A theoretical analysis of the performance that is expected from such an interferometer is presented and the experimental results are shown to be in generally good agreement.
Abstract:
Theoretical and experimental studies were made on two classes of buoyant jet problems, namely:
1) an inclined, round buoyant jet in a stagnant environment with linear density-stratification;
2) a round buoyant jet in a uniform cross stream of homogeneous density.
Using the integral technique of analysis, assuming similarity, predictions can be made for jet trajectory, widths, and dilution ratios, in a density-stratified or flowing environment. Such information is of great importance in the design of disposal systems for sewage effluent into the ocean or waste gases into the atmosphere.
The present study of a buoyant jet in a stagnant environment has extended the Morton type of analysis to cover the effect of the initial angle of discharge. Numerical solutions have been presented for a range of initial conditions. Laboratory experiments were conducted for photographic observations of the trajectories of dyed jets. In general the observed jet forms agreed well with the calculated trajectories and nominal half widths when the value of the entrainment coefficient was taken to be α = 0.082, as previously suggested by Morton.
The problem of a buoyant jet in a uniform cross stream was analyzed by assuming an entrainment mechanism based upon the vector difference between the characteristic jet velocity and the ambient velocity. The effect of the unbalanced pressure field on the sides of the jet flow was approximated by a gross drag term. Laboratory flume experiments with sinking jets, which are directly analogous to buoyant jets, were performed. Salt solutions were injected into fresh water at the free surface in a flume. The jet trajectories, dilution ratios and jet half widths were determined by conductivity measurements. The entrainment coefficient, α, and drag coefficient, Cd, were found from the observed jet trajectories and dilution ratios. In the ten cases studied, where the jet Froude number ranged from 10 to 80 and the velocity ratio (jet:current) K from 4 to 16, α varied from 0.4 to 0.5 and Cd from 1.7 to 0.1. The jet mixing motion for distances within 250D was found to be dominated by the self-generated turbulence, rather than the free-stream turbulence. Similarity of concentration profiles has also been discussed.
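As a minimal illustration of the integral technique, a Morton-type top-hat model for a vertical buoyant jet in a stagnant environment can be integrated numerically. The scaled flux variables and the simple Euler stepping are assumptions of this sketch; the inclined-jet and crossflow cases studied above add the discharge angle and a drag term:

```python
import numpy as np

def integrate_plume(q0, m0, f0, alpha=0.082, n2=0.0, dz=0.01, steps=1000):
    """Euler-integrate Morton-type top-hat equations for a vertical
    buoyant jet.  q, m, f are scaled volume, momentum and buoyancy
    fluxes; alpha is the entrainment coefficient (0.082 after Morton);
    n2 is the ambient buoyancy frequency squared (stratification).
    The sketch assumes m stays positive over the integration range.
    """
    q, m, f = float(q0), float(m0), float(f0)
    for _ in range(steps):
        dq = 2.0 * alpha * np.sqrt(m)   # entrainment widens the jet
        dm = f * q / m                  # buoyancy feeds momentum
        df = -n2 * q                    # stratification erodes buoyancy
        q, m, f = q + dq * dz, m + dm * dz, f + df * dz
    return q, m, f
```

In a linearly stratified environment (n2 > 0) the buoyancy flux decreases with height, which is the mechanism that ultimately arrests the rise of the jet.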
Abstract:
Recording beams at three wavelengths, red, green, and blue, are systematically tested for UV-assisted recording and optical fixing of holograms in a strongly oxidized Ce:Cu:LiNbO3 crystal. Three different photorefractive phenomena are observed. It is shown that the green beams optimally generate a strong nonvolatile hologram with high sensitivity, and that an optimal switching technique can be jointly used to obtain nearly 100% diffraction efficiency. Theoretical verification is given, and a prescription is suggested for the doping densities and the oxidation/reduction state of the material to match a given recording wavelength for high diffraction efficiency.
Abstract:
Within the wavelength range from 351 to 799 nm, different reductions of the nucleation field induced by focused continuous laser irradiation are achieved in 5 mol % MgO-doped congruent LiNbO3 crystals. The reduction proportion increases exponentially with decreasing irradiation wavelength. At a given wavelength, the reduction proportion increases exponentially with increasing irradiation intensity. It is proposed that the reduction of the nucleation field is directly related to the defect structure of the crystal lattice generated by the combined action of the incident irradiation field and the external electric field. (c) 2007 American Institute of Physics.
Abstract:
Since the discovery in 1962 of laser action in semiconductor diodes made from GaAs, the study of spontaneous and stimulated light emission from semiconductors has become an exciting new field of semiconductor physics and quantum electronics combined. Included in the limited number of direct-gap semiconductor materials suitable for laser action are the members of the lead salt family, i.e., PbS, PbSe and PbTe. The material used for the experiments described herein is PbTe. The semiconductor PbTe is a narrow band-gap material (Eg = 0.19 electron volt at a temperature of 4.2°K). Therefore, the radiative recombination of electron-hole pairs between the conduction and valence bands produces photons whose wavelength is in the infrared (λ ≈ 6.5 microns in air).
The p-n junction diode is a convenient device in which the spontaneous and stimulated emission of light can be achieved via current flow in the forward-bias direction. Consequently, the experimental devices consist of a group of PbTe p-n junction diodes made from p-type single crystal bulk material. The p-n junctions were formed by an n-type vapor-phase diffusion perpendicular to the (100) plane, with a junction depth of approximately 75 microns. Opposite ends of the diode structure were cleaved to give parallel reflectors, thereby forming the Fabry-Perot cavity needed for a laser oscillator. Since the emission of light originates from the recombination of injected current carriers, the nature of the radiation depends on the injection mechanism.
The total intensity of the light emitted from the PbTe diodes was observed over a current range of three to four orders of magnitude. At the low current levels, the light intensity data were correlated with data obtained on the electrical characteristics of the diodes. In the low current region (region A), the light intensity, current-voltage and capacitance-voltage data are consistent with the model for photon-assisted tunneling. As the current is increased, the light intensity data indicate the occurrence of a change in the current injection mechanism from photon-assisted tunneling (region A) to thermionic emission (region B). With the further increase of the injection level, the photon-field due to light emission in the diode builds up to the point where stimulated emission (oscillation) occurs. The threshold current at which oscillation begins marks the beginning of a region (region C) where the total light intensity increases very rapidly with the increase in current. This rapid increase in intensity is accompanied by an increase in the number of narrow-band oscillating modes. As the photon density in the cavity continues to increase with the injection level, the intensity gradually enters a region of linear dependence on current (region D), i.e. a region of constant (differential) quantum efficiency.
Data obtained from measurements of the stimulated-mode light-intensity profile and the far-field diffraction pattern (both in the direction perpendicular to the junction-plane) indicate that the active region of high gain (i.e. the region where a population inversion exists) extends to approximately a diffusion length on both sides of the junction. The data also indicate that the confinement of the oscillating modes within the diode cavity is due to a variation in the real part of the dielectric constant, caused by the gain in the medium. A value of τ ≈ 10⁻⁹ second for the minority-carrier recombination lifetime (at a diode temperature of 20.4°K) is obtained from the above measurements. This value for τ is consistent with other data obtained independently for PbTe crystals.
Data on the threshold current for stimulated emission (for a diode temperature of 20.4°K) as a function of the reciprocal cavity length were obtained. These data yield a value of J′th = (400 ± 80) amp/cm² for the threshold current in the limit of an infinitely long diode-cavity. A value of α = (30 ± 15) cm⁻¹ is obtained for the total (bulk) cavity loss constant, in general agreement with independent measurements of free-carrier absorption in PbTe. In addition, the data provide a value of ηs ≈ 10% for the internal spontaneous quantum efficiency. The above value for ηs yields values of τb ≈ τ ≈ 10⁻⁹ second and τs ≈ 10⁻⁸ second for the nonradiative and the spontaneous (radiative) lifetimes, respectively.
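The infinite-cavity extrapolation described above amounts to a straight-line fit of threshold current density against reciprocal cavity length, assuming the usual linear threshold relation in which the slope carries the end-mirror loss. The sketch below, with made-up numbers, shows the idea:

```python
import numpy as np

def fit_threshold(inv_length, j_th):
    """Linear fit J_th = intercept + slope * (1/L).  The intercept
    (the limit 1/L -> 0) estimates the threshold current density of
    an infinitely long cavity; the slope reflects the mirror loss."""
    slope, intercept = np.polyfit(inv_length, j_th, 1)
    return intercept, slope
```

Applied to measured (1/L, J_th) pairs, the intercept plays the role of the J′th value quoted above.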
The external quantum efficiency (ηd) for stimulated emission from diode J-2 (at 20.4°K) was calculated by using the total light intensity vs. diode current data, plus accepted values for the material parameters of the mercury-doped germanium detector used for the measurements. The resulting value is ηd ≈ 10%-20% for emission from both ends of the cavity. The corresponding radiative power output (at λ = 6.5 microns) is 120-240 milliwatts for a diode current of 6 amps.
Abstract:
The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite homogeneous, isotropic, non-magnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e. interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering) and conditions are set up for the case of a lossless medium which guarantee that the multiple scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ and the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ and that |Δχ|, where Δχ is the difference in the dielectric susceptibilities between particle and medium, has to lie within a certain range.
The total scattered field is obtained as a series whose terms represent the corresponding multiple scattering orders. The first term is the single scattering term. The ensemble average of the total scattered intensity is then obtained as a series which contains no cross terms between different orders. Thus the waves corresponding to different orders are independent and their Stokes parameters add.
The second and third order intensity terms are explicitly computed. The method used suggests a general approach for computing any order. It is found that in general the first order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20). For large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.
The first order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order. A similar method can be applied for any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized then the first order scattered wave is elliptically polarized, but in the Θ = π/2 direction it is linearly polarized. If the incident wave is circularly polarized the first order scattered wave is elliptically polarized except for the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized the first order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the incident wave is. However, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to those of the second order.
If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_imD, where k_im is the imaginary part of the wave vector k and D is a linear characteristic dimension of the region occupied by the particles. Thus moderately extended regions and small losses make (k_imD)² ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even from a high level would usually require a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside of this is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10000 times faster than the state-of-the-art simulator in the literature, bringing down the simulation time from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground-truth of the simulated images, because we dictate the structure at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and a parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure on a pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground-truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the thickness of the different layers and thereby reconstruct the ground-truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
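The classify-then-regress hierarchy can be illustrated with a deliberately tiny stand-in: a nearest-centroid classifier routing each input to a per-class linear least-squares "expert". The class structure and linear experts here are toy assumptions for illustration; the thesis itself uses far richer models:

```python
import numpy as np

class CommitteeOfExperts:
    """Toy classify-then-regress hierarchy: a nearest-centroid
    classifier picks the structure type, then a linear regressor
    trained only on that type predicts the outputs."""

    def fit(self, X, labels, y):
        classes = np.unique(labels)
        # classifier: one centroid per class
        self.centroids = {c: X[labels == c].mean(axis=0) for c in classes}
        # experts: per-class least-squares fit with a bias column
        self.experts = {}
        for c in classes:
            Xc = np.hstack([X[labels == c],
                            np.ones((np.sum(labels == c), 1))])
            self.experts[c], *_ = np.linalg.lstsq(Xc, y[labels == c],
                                                  rcond=None)
        return self

    def predict(self, x):
        # route to the nearest centroid, then apply that class's expert
        c = min(self.centroids,
                key=lambda k: np.linalg.norm(x - self.centroids[k]))
        return c, np.append(x, 1.0) @ self.experts[c]
```

The routing step mirrors the structure-determination stage in the thesis, and the per-class expert mirrors the structure-specific regression model.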
It is worth pointing out that solving the inverse problem automatically improves the effective imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis represents the first successful attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
It has been described that the near-field images of a high-density grating at the half self-imaging distance can differ between TE and TM polarization states. We propose that the phases of the diffraction orders play an important role in this polarization dependence. The view is verified through the agreement between numerical results from the finite-difference time-domain method and results reconstructed from rigorous coupled-wave analysis. Field distributions for TE and TM polarizations are given numerically for a grating with period d = 2.3λ and are verified experimentally with the scanning near-field optical microscopy technique. The phase interpretation not only explains the polarization dependence at the half self-imaging distance of gratings with a physical picture, but can also be widely used to describe the near-field diffraction of a variety of periodic diffractive optical elements whose feature size is comparable to the wavelength. (C) 2008 Elsevier B.V. All rights reserved.
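For reference, the paraxial self-imaging (Talbot) distance of a grating with period d is z_T = 2d²/λ, and the half self-imaging distance is z_T/2. For the high-density case d = 2.3λ studied above, the paraxial formula is only indicative, which is why rigorous methods (RCWA, FDTD) are needed; the helper below is a sketch under that caveat:

```python
def talbot_distance(period, wavelength):
    """Paraxial Talbot self-imaging distance z_T = 2 * d**2 / lam.
    Strictly valid for d >> lam; for d ~ lam (as in the grating
    discussed above) the result is indicative only."""
    return 2.0 * period ** 2 / wavelength
```

For d = 2.3λ this gives z_T = 10.58λ, i.e. a half self-imaging distance of 5.29λ.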
Abstract:
An experimental investigation of the optical properties of β-gallium oxide has been carried out, covering the wavelength range 220-2500 nm.
The refractive index and birefringence have been determined to about ±1% accuracy over the range 270-2500 nm by the use of a technique based on the occurrence of fringes in the transmission of a thin sample due to multiple internal reflections in the sample (i.e., the "channelled spectrum" of the sample).
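The fringe technique reduces to a simple relation: the transmission maxima of a plane-parallel slab satisfy 2nt = mλ, so two adjacent maxima determine n directly. The sketch below neglects dispersion between the two fringes, an assumption of this illustration:

```python
def index_from_fringes(lam_short, lam_long, thickness):
    """Refractive index from two adjacent channelled-spectrum maxima at
    vacuum wavelengths lam_short < lam_long (same units as thickness).
    From 2*n*t = (m+1)*lam_short = m*lam_long it follows that
    2*n*t*(1/lam_short - 1/lam_long) = 1, which is solved for n."""
    return 1.0 / (2.0 * thickness * (1.0 / lam_short - 1.0 / lam_long))
```

Two maxima a few nanometres apart in a sample on the order of 100 µm thick are enough to recover n, consistent with the percent-level accuracy quoted above.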
The optical absorption coefficient has been determined over the range 220-300 nm, a range that spans the fundamental absorption edge of β-Ga2O3. Two techniques were used in the absorption coefficient determination: measurement of the transmission of a thin sample, and measurement of the photocurrent from a Schottky barrier formed on the surface of a sample. The absorption coefficient was measured over a range from 10 to greater than 10⁵, to an accuracy of better than ±20%. The absorption edge was found to be strongly polarization-dependent.
Detailed analyses are presented of all three experimental techniques used. Experimentally determined values of the optical constants are presented in graphical form.
Abstract:
Sub-lethal toxicity tests, such as the scope-for-growth test, reveal simple relationships between measures of contaminant concentration and effect on respiratory and feeding physiology. Simple models are presented to investigate the potential impact of different mechanisms of chronic sub-lethal toxicity on these physiological processes. Since environmental quality is variable, even in unimpacted environments, toxicants may have differentially greater impacts in poor compared to higher quality environments. The models illustrate the implications of different degrees and mechanisms of toxicity in response to variability in the quality of the feeding environment, and variability in standard metabolic rate. The models suggest that the relationships between measured degrees of toxic stress, and the maintenance ration required to maintain zero scope-for-growth, may be highly nonlinear. In addition it may be possible to define critical levels of sub-lethal toxic effect above which no environment is of sufficient quality to permit prolonged survival.