Abstract:
We have conducted the first extensive field test of two new methods to retrieve optical properties for overhead clouds that range from patchy to overcast. The methods use measurements of zenith radiance at 673 and 870 nm wavelengths and require the presence of green vegetation in the surrounding area. The test was conducted at the Atmospheric Radiation Measurement Program Oklahoma site during September–November 2004. These methods work because at 673 nm (red) and 870 nm (near infrared (NIR)), clouds have nearly identical optical properties, while vegetated surfaces reflect quite differently. The first method, dubbed REDvsNIR, retrieves not only cloud optical depth τ but also radiative cloud fraction. Because of the 1-s time resolution of our radiance measurements, we are able for the first time to capture changes in cloud optical properties at the natural timescale of cloud evolution. We compared values of τ retrieved by REDvsNIR to those retrieved from downward shortwave fluxes and from microwave brightness temperatures. The flux method generally underestimates τ relative to the REDvsNIR method. Even for overcast but inhomogeneous clouds, differences between REDvsNIR and the flux method can be as large as 50%. In addition, REDvsNIR agreed to better than 15% with the microwave method for both overcast and broken clouds. The second method, dubbed COUPLED, retrieves τ by combining zenith radiances with fluxes. While extra information from fluxes was expected to improve retrievals, this is not always the case. In general, however, the COUPLED and REDvsNIR methods retrieve τ to within 15% of each other.
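The two-wavelength idea described above can be sketched as a toy inversion: model the zenith radiance at red and NIR as a mixture of a cloudy and a clear-sky component over a vegetated surface, then search for the (τ, cloud fraction) pair that reproduces a measurement. Everything below (the radiance model, albedos, and grids) is an invented illustration, not the REDvsNIR radiative-transfer scheme.

```python
import numpy as np

# Toy forward model: zenith "radiance" as a mix of cloudy and clear components.
# The functional forms and coefficients are made up for illustration only.
def toy_radiance(tau, cf, surface_albedo):
    cloud = tau / (tau + 4.0)              # toy cloud signal growing with tau
    clear = 0.1 + 0.3 * surface_albedo     # toy clear-sky zenith radiance
    return cf * cloud * (1 + 0.5 * surface_albedo) + (1 - cf) * clear

albedo_red, albedo_nir = 0.05, 0.45        # green vegetation: dark red, bright NIR

# Synthetic "measurement" generated from a known truth
tau_true, cf_true = 8.0, 0.6
meas = np.array([toy_radiance(tau_true, cf_true, albedo_red),
                 toy_radiance(tau_true, cf_true, albedo_nir)])

# Grid search for the (tau, cloud fraction) pair that best fits both channels
taus = np.linspace(0.1, 30, 300)
cfs = np.linspace(0, 1, 101)
T, C = np.meshgrid(taus, cfs)
model = np.stack([toy_radiance(T, C, albedo_red),
                  toy_radiance(T, C, albedo_nir)])
cost = np.sum((model - meas[:, None, None]) ** 2, axis=0)
i = np.unravel_index(np.argmin(cost), cost.shape)
print(T[i], C[i])  # recovers approximately (8.0, 0.6)
```

The point of the sketch is that the contrast between the red and NIR surface albedos is what makes the two-unknown problem (τ and radiative cloud fraction) solvable from two radiances.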
Abstract:
The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with simulation results from two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are then used by the second to produce depth-concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies make it possible to better constrain atmospheric trends. The low concentrations of all studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are small. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained. These changes result from the time needed for their transport from the surface, where they are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties in CFC lifetimes.
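The Green function view of firn air mentioned above can be illustrated with a toy convolution: the concentration at a given depth is the atmospheric history weighted by an age distribution. The growth history and the Gaussian age distribution below are invented illustrations, not outputs of the 1-D firn model.

```python
import numpy as np

# Toy atmospheric history: zero before 1950, then linear growth (made up).
years = np.arange(1900, 2001)
atmos = np.where(years < 1950, 0.0, (years - 1950) * 2.0)

# Toy age distribution G(a): Gaussian centered on a mean air age of ~20 yr.
ages = np.arange(0, 60)
G = np.exp(-0.5 * ((ages - 20) / 8.0) ** 2)
G /= G.sum()                              # normalize so weights sum to 1

# Firn concentration sampled in 2000 = sum over age a of G(a) * atmos(2000 - a)
firn_2000 = sum(G[a] * atmos[years == 2000 - a][0] for a in ages)
print(firn_2000)  # close to the atmospheric value ~20 years earlier (about 60)
```

The spread of G is what smooths sharp atmospheric trends in the firn record, which is why multi-site, multi-species data help deconvolve the history.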
Abstract:
An interface between satellite retrievals and the incremental version of the four-dimensional variational assimilation scheme is developed, making full use of the information content of satellite measurements. In this paper, expressions for the function that calculates simulated observations from model states (called “observation operator”), together with its tangent linear version and adjoint, are derived. Results from our work can be used for implementing a quasi-optimal assimilation of satellite retrievals (e.g., of atmospheric trace gases) in operational meteorological centres.
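A standard consistency check for a tangent linear observation operator and its adjoint, of the kind derived above, is the inner-product identity ⟨Hx, y⟩ = ⟨x, Hᵀy⟩. A minimal sketch, using a random matrix as a stand-in for a real observation operator:

```python
import numpy as np

# Adjoint (dot-product) test: for a linearized observation operator H,
# <H x, y> must equal <x, H^T y> for arbitrary x, y.
# H here is a random matrix, purely a stand-in for a retrieval interface.
rng = np.random.default_rng(42)
H = rng.standard_normal((5, 8))      # maps model state (dim 8) to observations (dim 5)

x = rng.standard_normal(8)           # model-space perturbation
y = rng.standard_normal(5)           # observation-space vector

lhs = (H @ x) @ y                    # <H x, y>
rhs = x @ (H.T @ y)                  # <x, H^T y>
print(lhs, rhs)                      # identical up to rounding
```

In practice this test is run on the actual operator code, since a hand-derived adjoint that fails it will silently corrupt the variational minimization.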
Abstract:
This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates for other, low data rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 Distributed Coordination Function (DCF). This gain is maintained in the presence of errors. Relays are also expected to reduce the delay. However, the delay behavior of ErDCF under transmission errors was not previously known. In this work, we present the impact of transmission errors on delay. It turns out that transmission errors of sufficient magnitude to increase the number of dropped packets actually reduce packet delay, because the probability of failure increases. As a result, the packet drop time increases, reflecting the throughput degradation.
Abstract:
Terahertz (THz) frequency radiation, 0.1 THz to 20 THz, is being investigated for biomedical imaging applications following the introduction of pulsed THz sources that produce picosecond pulses and function at room temperature. Owing to the broadband nature of the radiation, spectral and temporal information is available from radiation that has interacted with a sample; this information is exploited in the development of biomedical imaging tools and sensors. In this work, models to aid interpretation of broadband THz spectra were developed and evaluated. THz radiation lies on the boundary between regions best considered using a deterministic electromagnetic approach and those better analysed using a stochastic approach incorporating quantum mechanical effects, so two computational models to simulate the propagation of THz radiation in an absorbing medium were compared. The first was a thin film analysis and the second a stochastic Monte Carlo model. The Cole–Cole model was used to predict the variation with frequency of the physical properties of the sample and scattering was neglected. The two models were compared with measurements from a highly absorbing water-based phantom. The Monte Carlo model gave a prediction closer to experiment over 0.1 to 3 THz. Knowledge of the frequency-dependent physical properties, including the scattering characteristics, of the absorbing media is necessary. The thin film model is computationally simple to implement but is restricted by the geometry of the sample it can describe. The Monte Carlo framework, despite being initially more complex, provides greater flexibility to investigate more complicated sample geometries.
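For a purely absorbing medium with scattering neglected, as assumed above for the water-based phantom, the Monte Carlo approach reduces to a very simple sketch: draw exponential absorption paths and count the photons that cross the slab, which should reproduce the Beer-Lambert transmittance. The absorption coefficient and thickness below are made-up numbers, not the Cole-Cole values used in the study.

```python
import numpy as np

# Minimal Monte Carlo for photon transport through a purely absorbing slab.
rng = np.random.default_rng(0)
mu_a = 2.0          # absorption coefficient, mm^-1 (assumed)
thickness = 1.0     # slab thickness, mm (assumed)
n_photons = 200_000

# Each photon's absorption free path is exponentially distributed with mean
# 1/mu_a; the photon is transmitted if that path exceeds the slab thickness.
paths = rng.exponential(1.0 / mu_a, n_photons)
transmitted = np.mean(paths > thickness)

beer_lambert = np.exp(-mu_a * thickness)  # analytic transmittance
print(transmitted, beer_lambert)          # the two agree closely
```

The value of the Monte Carlo framework only appears once interfaces, frequency-dependent properties, or more complicated geometries are added, where no closed form exists.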
Abstract:
The precision of quasioptical null-balanced bridge instruments for transmission and reflection coefficient measurements at millimeter and submillimeter wavelengths is analyzed. A Jones matrix analysis is used to describe the amount of power reaching the detector as a function of grid angle orientation, sample transmittance/reflectance and phase delay. An analysis is performed of the errors involved in determining the complex transmission and reflection coefficient after taking into account the quantization error in the grid angle and micrometer readings, the transmission or reflection coefficient of the sample, the noise equivalent power of the detector, the source power and the post-detection bandwidth. For a system fitted with a rotating grid with a resolution of 0.017 rad, a micrometer quantization error of 1 μm, a 1 mW source, and a detector with a noise equivalent power of 5×10⁻⁹ W Hz⁻¹/², the maximum errors at an amplitude transmission or reflection coefficient of 0.5 are below ±0.025.
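Two of the error terms above can be estimated on the back of an envelope under assumed simplified relationships: an amplitude coefficient set by the grid angle as t = cos θ, and detector noise limited by the NEP. This is only an illustration with the quoted instrument numbers, not the paper's full Jones-matrix error analysis.

```python
import math

# Grid-angle quantization term: if t = cos(theta), then dt = |sin(theta)| * dtheta.
t = 0.5                         # amplitude transmission coefficient
d_theta = 0.017                 # grid angle quantization, rad (from the abstract)
theta = math.acos(t)
quantization_error = abs(math.sin(theta)) * d_theta

# Detector-noise term: power SNR = P_detected / (NEP * sqrt(bandwidth));
# amplitude error taken as roughly half the relative power error (assumption).
P_source = 1e-3                 # W (from the abstract)
nep = 5e-9                      # W / sqrt(Hz) (from the abstract)
bandwidth = 1.0                 # post-detection bandwidth, Hz (assumed)
snr_power = (t ** 2 * P_source) / (nep * math.sqrt(bandwidth))
noise_error = t / (2 * snr_power)

print(quantization_error, noise_error)
# both fall below the ±0.025 bound quoted above, with quantization dominating
```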
Abstract:
A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wideband time-resolved terahertz spectroscopy. A transfer function representation is adopted to describe the signal at the input and output ports of the waveguides. The time domain responses were discretised and the waveguide transfer function was obtained through a parametric approach in the z-domain, after describing the system with an ARX as well as with a state space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize signal distortion and the noise propagating in the ARX and subspace models. The model identification procedure requires isolation of the phase delay in the structure, so the time-domain signatures must first be aligned with each other before they are compared. An initial estimate of the number of propagating modes was provided by comparing the measured phase delay in the structure with theoretical calculations that take into account the physical dimensions of the waveguide. Models derived from measurements of THz transients in a precision WR-8 waveguide adjustable short are presented.
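The z-domain ARX identification step above amounts to a linear least-squares fit of past outputs and inputs to the current output. A toy sketch on a synthetic first-order system (not the measured WR-8 data):

```python
import numpy as np

# Fit an ARX model y[n] = -a1*y[n-1] + b0*u[n] + b1*u[n-1] by least squares.
# The "waveguide" is a synthetic system with known coefficients.
rng = np.random.default_rng(1)
a1_true, b0_true, b1_true = -0.8, 0.5, 0.2

u = rng.standard_normal(500)              # synthetic input transient
y = np.zeros_like(u)
for n in range(1, len(u)):
    y[n] = -a1_true * y[n - 1] + b0_true * u[n] + b1_true * u[n - 1]

# Regression matrix with columns [-y[n-1], u[n], u[n-1]]
Phi = np.column_stack([-y[:-1], u[1:], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)  # recovers approximately [-0.8, 0.5, 0.2]
```

With noisy measured transients the same regression is preceded by denoising (here, the wavelet filtering described above), since noise in the regressors biases the least-squares estimates.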
Abstract:
Accurate estimates for the fall speed of natural hydrometeors are vital if their evolution in clouds is to be understood quantitatively. In this study, laboratory measurements of the terminal velocity vt for a variety of ice particle models settling in viscous fluids, along with wind-tunnel and field measurements of ice particles settling in air, have been analyzed and compared to common methods of computing vt from the literature. It is observed that while these methods work well for a number of particle types, they fail for particles with open geometries, specifically those particles for which the area ratio Ar is small (Ar is defined as the area of the particle projected normal to the flow divided by the area of a circumscribing disc). In particular, the fall speeds of stellar and dendritic crystals, needles, open bullet rosettes, and low-density aggregates are all overestimated. These particle types are important in many cloud types: aggregates in particular often dominate snow precipitation at the ground and vertically pointing Doppler radar measurements. Based on the laboratory data, a simple area-ratio-based modification to previous computational methods is proposed. This new method collapses the available drag data onto an approximately universal curve, and the resulting errors in the computed fall speeds relative to the tank data are less than 25% in all cases. Comparison with the (much more scattered) measurements of ice particles falling in air shows strong support for this new method, with the area ratio bias apparently eliminated.
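One way an area-ratio modification of a drag-law fall-speed calculation can be realized is sketched below, using a Best-number/Reynolds-number relation. The boundary-layer constants and the example particle are illustrative assumptions, and the exponent applied to Ar is exposed as a parameter rather than taken from the paper.

```python
import numpy as np

def fall_speed(mass, diameter, area_ratio, ar_exponent,
               rho_air=1.0, eta=1.7e-5, g=9.81, delta0=8.0, c0=0.35):
    """Terminal velocity (m/s) from a Best-number drag law (illustrative)."""
    # Best number with an adjustable area-ratio exponent in the denominator
    best = 8.0 * mass * g * rho_air / (np.pi * eta ** 2 * area_ratio ** ar_exponent)
    # Boundary-layer Reynolds-number relation (delta0, c0 are assumed constants)
    reynolds = (delta0 ** 2 / 4.0) * (
        np.sqrt(1.0 + 4.0 * np.sqrt(best) / (delta0 ** 2 * np.sqrt(c0))) - 1.0
    ) ** 2
    return eta * reynolds / (rho_air * diameter)

# Example: an open, low-Ar 1 mm crystal with made-up mass and area ratio.
v_sqrt_ar = fall_speed(mass=2e-8, diameter=1e-3, area_ratio=0.3, ar_exponent=0.5)
v_full_ar = fall_speed(mass=2e-8, diameter=1e-3, area_ratio=0.3, ar_exponent=1.0)
print(v_sqrt_ar, v_full_ar)
# the softer sqrt(Ar) weighting predicts a slower fall for this open particle,
# in the direction needed to correct the overestimates noted above
```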
Abstract:
Canopy leaf area index (LAI), defined as the single-sided leaf area per unit ground area, is a quantitative measure of canopy foliar area. LAI is a controlling biophysical property of vegetation function, and quantifying LAI is thus vital for understanding energy, carbon and water fluxes between the land surface and the atmosphere. LAI is routinely available from Earth Observation (EO) instruments such as MODIS. However, EO-derived estimates of LAI require validation before they are utilised by the ecosystem modelling community. Previous validation work on the MODIS collection 4 (c4) product suggested considerable error, especially in forested biomes, and as a result the MODIS LAI algorithm was significantly modified for the most recent collection 5 (c5). Because of these changes, the current MODIS LAI product has not been widely validated. We present a validation of the MODIS c5 LAI product over a 121 km² area of mixed coniferous forest in Oregon, USA, based on detailed ground measurements which we have upscaled using high resolution EO data. Our analysis suggests that c5 shows a much more realistic temporal LAI dynamic than c4 for the site we examined. We find improved spatial consistency between the MODIS c5 LAI product and upscaled in situ measurements. However, results also suggest that the c5 LAI product underestimates the upper range of upscaled in situ LAI measurements.
Abstract:
The goal of this paper is to study and further develop the orthogonality sampling or stationary waves algorithm for detecting the location and shape of objects from the far field pattern of scattered waves in electromagnetics or acoustics. Orthogonality sampling can be seen as a special beam forming algorithm with links to the point source method and the linear sampling method. The basic idea of orthogonality sampling is to sample the space under consideration by calculating scalar products of the measured far field pattern with a test function, for all points y in a subset Q of the space ℝ^m, m = 2, 3. The way in which this is carried out is important for extracting the information which the scattered fields contain. The theoretical foundation of orthogonality sampling is only partly resolved, and the goal of this work is to initiate further research by numerically demonstrating the high potential of the approach. We implement the method in a two-dimensional setting for the Helmholtz equation, which represents electromagnetic scattering when the setup is independent of the third coordinate. We show reconstructions of the location and shape of objects from measurements of the scattered field for one or several directions of incidence and one or many frequencies or wave numbers, respectively. In particular, we visualize the indicator function both with the Dirichlet and Neumann boundary condition and for complicated inhomogeneous media.
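The sampling idea above can be sketched in a few lines: evaluate, at every test point y, the magnitude of the scalar product of the far field pattern with a plane-wave-type test function, and look for peaks. The far-field data below are synthetic (an idealized point scatterer), not the configurations studied in the paper.

```python
import numpy as np

# Orthogonality sampling indicator in 2-D:
#   I(y) = | sum_j u_inf(xhat_j) * exp(i * k * xhat_j . y) |
k = 5.0                                   # wavenumber
angles = np.linspace(0, 2 * np.pi, 64, endpoint=False)
xhat = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # far-field directions

z_true = np.array([0.4, -0.2])            # synthetic point scatterer location
u_inf = np.exp(-1j * k * xhat @ z_true)   # its idealized far-field pattern

# Evaluate the indicator on a grid of test points y in Q = [-1, 1]^2
grid = np.linspace(-1, 1, 101)
Y1, Y2 = np.meshgrid(grid, grid)
points = np.stack([Y1.ravel(), Y2.ravel()], axis=1)
indicator = np.abs(np.exp(1j * k * points @ xhat.T) @ u_inf)

# The indicator peaks near the scatterer location
best = points[np.argmax(indicator)]
print(best)  # close to [0.4, -0.2]
```

At the true location all summands align in phase, which is the "orthogonality" being exploited; multiple frequencies or incidence directions sharpen the peak.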
Abstract:
The ground-based Atmospheric Radiation Measurement Program (ARM) and NASA Aerosol Robotic Network (AERONET) routinely monitor clouds using zenith radiances at visible and near-infrared wavelengths. Using the transmittance calculated from such measurements, we have developed a new retrieval method for cloud effective droplet size and conducted extensive tests for non-precipitating liquid water clouds. The underlying principle is to combine a liquid-water-absorbing wavelength (i.e., 1640 nm) with a non-water-absorbing wavelength for acquiring information on cloud droplet size and optical depth. For simulated stratocumulus clouds with liquid water path less than 300 g m⁻² and horizontal resolution of 201 m, the retrieval method underestimates the mean effective radius by 0.8 μm, with a root-mean-squared error of 1.7 μm and a relative deviation of 13%. For actual observations with a liquid water path less than 450 g m⁻² at the ARM Oklahoma site during 2007–2008, our 1.5-min-averaged retrievals are generally larger by around 1 μm than those from combined ground-based cloud radar and microwave radiometer at a 5-min temporal resolution. We also compared our retrievals to those from combined shortwave flux and microwave observations for relatively homogeneous clouds, showing that the bias between these two retrieval sets is negligible, but the error of 2.6 μm and the relative deviation of 22% are larger than those found in our simulation case. Finally, the transmittance-based cloud effective droplet radii agree to better than 11% with satellite observations and have a negative bias of 1 μm. Overall, the retrieval method provides reasonable cloud effective radius estimates, which can enhance the cloud products of both ARM and AERONET.
Abstract:
Metabolic syndrome is a set of disorders that increases the risk of developing cardiovascular disease. The gut microbiota is altered toward a less beneficial composition in overweight adults and this change can be accompanied by inflammation. Prebiotics such as galactooligosaccharides can positively modify the gut microbiota and immune system; some may also reduce blood lipids. We assessed the effect of a galactooligosaccharide mixture [Bi2muno (B-GOS)] on markers of metabolic syndrome, gut microbiota, and immune function in 45 overweight adults with ≥3 risk factors associated with metabolic syndrome in a double-blind, randomized, placebo (maltodextrin)-controlled, crossover study (with a 4-wk wash-out period between interventions). Whole blood, saliva, feces, and anthropometric measurements were taken at the beginning, wk 6, and end of each 12-wk intervention period. Predominant groups of fecal bacteria were quantified and full blood count, markers of inflammation and lipid metabolism, insulin, and glucose were measured. B-GOS increased the number of fecal bifidobacteria at the expense of less desirable groups of bacteria. Increases in fecal secretory IgA and decreases in fecal calprotectin, plasma C-reactive protein, insulin, total cholesterol (TC), TG, and the TC:HDL cholesterol ratio were also observed. Administration of B-GOS to overweight adults resulted in positive effects on the composition of the gut microbiota, the immune response, and insulin, TC, and TG concentrations. B-GOS may be a useful candidate for the enhancement of gastrointestinal health, immune function, and the reduction of metabolic syndrome risk factors in overweight adults.
Abstract:
Small, synthetic peptides based on specific regions of voltage-gated Ca2+ channels (VGCCs) have been widely used to study Ca2+ channel function and have been instrumental in confirming the contribution of specific amino acid sequences to interactions with putative binding partners. In particular, peptides based on the Ca2+ channel Alpha Interaction Domain (AID) on the intracellular region connecting domains I and II (the I-II loop) and the SYNaptic PRotein INTeraction (synprint) site on the II-III loop have been widely used. Emerging evidence suggests that such peptides may themselves possess inherent functionality, a property that may be exploitable for future drug design. Here, we review our recent work using synthetic Ca2+ channel peptides based on sequences within the CaV2.2 amino terminal and I-II loop, originally identified as molecular determinants for G protein modulation, and their effects on VGCC function. These CaV2.2 peptides act as inhibitory modules to decrease Ca2+ influx with direct effects on VGCC gating, ultimately leading to a reduction of synaptic transmission. CaV2.2 peptides also attenuate G protein modulation of VGCCs. Amino acid substitutions generate CaV2.2 peptides with increased or decreased inhibitory effects, suggesting that synthetic peptides can be used to further probe VGCC function and, potentially, form the basis for novel therapeutic development.
Abstract:
Lord Kelvin (William Thomson) made important contributions to the study of atmospheric electricity during a brief but productive period from 1859 to 1861. By 1859 Kelvin had recognised the need for “incessant recording” of atmospheric electrical parameters, and responded by inventing both the water dropper equaliser for measuring the atmospheric potential gradient (PG), and photographic data logging. The water dropper equaliser was widely adopted internationally and is still in use today. Following theoretical considerations of electric field distortion by local topography, Kelvin developed a portable electrometer, using it to investigate the PG on the Scottish island of Arran. During these environmental measurements, Kelvin may have unwittingly detected atmospheric PG changes during the solar activity of August/September 1859 associated with the “Carrington event”, which is interesting in the context of his later statements that solar magnetic influence on the Earth was impossible. Kelvin’s atmospheric electricity work presents an early representative study in quantitative environmental physics, through the application of mathematical principles to an environmental problem, the design and construction of bespoke instrumentation for real world measurements, and the recognition of the limitations of the original theoretical view revealed by experimental work.
Abstract:
Modern age samples from various depositional environments were examined for signal resetting. For 19 modern aeolian/beach samples all De values obtained were […], with ∼70% having […]. For 21 fluvial/colluvial samples, all De values were […] with ∼80% being […]. De as a function of illumination (OSL measurement) time (De(t)) plots were examined for all samples. Based on previous laboratory experiments, increases in De(t) were expected for partially reset samples, and constant De(t) for fully reset samples. All aeolian samples, both modern age and additional ‘young’ samples (<1000 years), showed constant (flat) De(t), while all modern, non-zero De, fluvial/colluvial samples showed increasing De(t). ‘Replacement plots’, where a regenerated signal is substituted for the natural, yielded constant (flat) De(t). These findings strongly support the use of De(t) as a method of identifying incomplete resetting in fluvial samples. Potential complicating factors, such as illumination (bleaching) spectrum, thermal instability and component composition, are discussed, and a series of internal checks on the applicability of the De(t) approach at the individual aliquot/grain level is outlined.
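The De(t) screening idea above reduces to a simple trend test: compute the equivalent dose over successive illumination-time intervals and flag a rising trend as incomplete resetting. The De series and the slope threshold below are synthetic illustrations, not measured values.

```python
import numpy as np

def de_t_slope(de_values, t):
    """Least-squares slope of De against illumination time."""
    return np.polyfit(t, de_values, 1)[0]

t = np.arange(1.0, 9.0)                # illumination-time intervals, s (assumed)
de_aeolian = np.full(8, 0.05)          # flat De(t): signal fully reset at burial
de_fluvial = 0.05 + 0.04 * t           # rising De(t): partial resetting

# A slope threshold (here 0.01, arbitrary) separates flat from rising plots
well_reset = abs(de_t_slope(de_aeolian, t)) < 0.01
poorly_reset = de_t_slope(de_fluvial, t) > 0.01
print(well_reset, poorly_reset)  # True True
```

In practice the threshold would be set against the De uncertainties of each aliquot, which is where the internal checks mentioned above come in.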