945 results for Surface wave methods
Abstract:
The alignment of model amyloid peptide YYKLVFFC is investigated in bulk and at a solid surface using a range of spectroscopic methods employing polarized radiation. The peptide is based on a core sequence of the amyloid beta (A beta) peptide, KLVFF. The attached tyrosine and cysteine units are exploited to yield information on alignment and possible formation of disulfide or dityrosine links. Polarized Raman spectroscopy on aligned stalks provides information on tyrosine orientation, which complements data from linear dichroism (LD) on aqueous solutions subjected to shear in a Couette cell. LD provides a detailed picture of alignment of peptide strands and aromatic residues and was also used to probe the kinetics of self-assembly. This suggests initial association of phenylalanine residues, followed by subsequent registry of strands and orientation of tyrosine residues. X-ray diffraction (XRD) data from aligned stalks is used to extract orientational order parameters from the 0.48 nm reflection in the cross-beta pattern, from which an orientational distribution function is obtained. X-ray diffraction on solutions subject to capillary flow confirmed orientation in situ at the level of the cross-beta pattern. The information on fibril and tyrosine orientation from polarized Raman spectroscopy is compared with results from NEXAFS experiments on samples prepared as films on silicon. This indicates fibrils are aligned parallel to the surface, with phenyl ring normals perpendicular to the surface. Possible disulfide bridging leading to peptide dimer formation was excluded by Raman spectroscopy, whereas dityrosine formation was probed by fluorescence experiments and was found not to occur except under alkaline conditions. Congo red binding was found not to influence the cross-beta XRD pattern.
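The orientational order parameter extracted from an azimuthal XRD intensity profile can be illustrated with a short sketch. This is not the authors' code; it is a minimal, generic implementation of the standard sin-weighted ⟨P2⟩ average, applied to a synthetic Gaussian azimuthal profile (the 15° width is an assumed illustrative value, not taken from the paper).

```python
import numpy as np

def p2_order_parameter(phi, intensity):
    """<P2> = (3<cos^2 phi> - 1)/2, with the sin(phi)-weighted average
    taken over the azimuthal intensity profile I(phi); phi is measured
    in radians from the alignment axis on a uniform grid over [0, pi/2]."""
    w = intensity * np.sin(phi)
    cos2 = np.sum(w * np.cos(phi) ** 2) / np.sum(w)  # uniform grid assumed
    return 0.5 * (3.0 * cos2 - 1.0)

# Synthetic example: Gaussian azimuthal spread about the fibril axis
# (illustrative 15-degree width, not a value from the study).
phi = np.linspace(1e-4, np.pi / 2, 500)
intensity = np.exp(-phi**2 / (2 * np.radians(15.0) ** 2))
p2 = p2_order_parameter(phi, intensity)
```

A perfectly aligned sample gives ⟨P2⟩ → 1, an isotropic one gives ⟨P2⟩ = 0, so the value for a measured azimuthal profile quantifies the degree of fibril alignment.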
An assessment of aerosol‐cloud interactions in marine stratus clouds based on surface remote sensing
Abstract:
An assessment of aerosol-cloud interactions (ACI) from ground-based remote sensing under coastal stratiform clouds is presented. The assessment utilizes a long-term, high temporal resolution data set from the Atmospheric Radiation Measurement (ARM) Program deployment at Pt. Reyes, California, United States, in 2005 to provide statistically robust measures of ACI and to characterize the variability of the measures based on variability in environmental conditions and observational approaches. The average ACI_N (= dlnN_d/dlna, the change in cloud drop number concentration with aerosol concentration) is 0.48, within a physically plausible range of 0–1.0. Values vary between 0.18 and 0.69 with dependence on (1) the assumption of constant cloud liquid water path (LWP), (2) the relative value of cloud LWP, (3) methods for retrieving N_d, (4) aerosol size distribution, (5) updraft velocity, and (6) the scale and resolution of observations. The sensitivity of the local, diurnally averaged radiative forcing to this variability in ACI_N values, assuming an aerosol perturbation of 500 cm−3 relative to a background concentration of 100 cm−3, ranges between −4 and −9 W m−2. Further characterization of ACI and its variability is required to reduce uncertainties in global radiative forcing estimates.
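The defining relation ACI_N = dlnN_d/dlna amounts to the slope of ln N_d regressed on ln a. A minimal sketch of that estimate, on synthetic data with an assumed true slope of 0.48 (the numbers are illustrative, not the ARM observations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic aerosol concentration a (cm^-3) and drop number N_d with a
# prescribed ACI_N of 0.48 plus lognormal scatter (illustrative values).
a = rng.uniform(50.0, 1000.0, size=500)
true_aci = 0.48
nd = 10.0 * a**true_aci * rng.lognormal(0.0, 0.1, size=a.size)

# ACI_N = dlnN_d/dlna: the slope of ln N_d against ln a, by least squares.
aci_n, _ = np.polyfit(np.log(a), np.log(nd), 1)
```

In practice the regression would be stratified by LWP and updraft, which is one source of the 0.18–0.69 spread reported above.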
Abstract:
A large number of urban surface energy balance models now exist with different assumptions about the important features of the surface and exchange processes that need to be incorporated. To date, no comparison of these models has been conducted; in contrast, models for natural surfaces have been compared extensively as part of the Project for Intercomparison of Land-surface Parameterization Schemes. Here, the methods and first results from an extensive international comparison of 33 models are presented. The aim of the comparison overall is to understand the complexity required to model energy and water exchanges in urban areas. The degree of complexity included in the models is outlined and impacts on model performance are discussed. During the comparison there have been significant developments in the models with resulting improvements in performance (root-mean-square error falling by up to two-thirds). Evaluation is based on a dataset containing net all-wave radiation, sensible heat, and latent heat flux observations for an industrial area in Vancouver, British Columbia, Canada. The aim of the comparison is twofold: to identify those modeling approaches that minimize the errors in the simulated fluxes of the urban energy balance and to determine the degree of model complexity required for accurate simulations. There is evidence that some classes of models perform better for individual fluxes but no model performs best or worst for all fluxes. In general, the simpler models perform as well as the more complex models based on all statistical measures. Generally the schemes have best overall capability to model net all-wave radiation and least capability to model latent heat flux.
Abstract:
The fabrication and characterization of micromachined reduced-height air-filled rectangular waveguide components suitable for integration is reported in this paper. The lithographic technique used permits structures with heights of up to 100 μm to be successfully constructed in a repeatable manner. Waveguide S-parameter measurements at frequencies between 75-110 GHz using a vector network analyzer demonstrate low loss propagation in the TE10 mode reaching 0.2 dB per wavelength. Scanning electron microscope photographs of conventional and micromachined waveguides show that the fabrication technique can provide a superior surface finish to that possible with commercially available components. In order to circumvent problems in efficiently coupling free-space propagating beams to the reduced-height G-band waveguides, as well as to characterize them using quasi-optical techniques, a novel integrated micromachined slotted horn antenna has been designed and fabricated. E-, H-, and D-plane far-field antenna pattern measurements at different frequencies using a quasi-optical setup show that the fabricated structures are optimized for 180-GHz operation with an E-plane half-power beamwidth of 32° elevated 35° above the substrate, a symmetrical H-plane pattern with a half-power beamwidth of 23° and a maximum D-plane cross-polar level of -33 dB. Far-field pattern simulations using HFSS show good agreement with experimental results.
The dependence of clear-sky outgoing longwave radiation on surface temperature and relative humidity
Abstract:
A simulation of the earth's clear-sky long-wave radiation budget is used to examine the dependence of clear-sky outgoing long-wave radiation (OLR) on surface temperature and relative humidity. The simulation uses the European Centre for Medium-Range Weather Forecasts global reanalysed fields to calculate clear-sky OLR over the period from January 1979 to December 1993, thus allowing the seasonal and interannual time-scales to be resolved. The clear-sky OLR is shown to be primarily dependent on temperature changes at high latitudes and on changes in relative humidity at lower latitudes. Regions exhibiting a ‘super-greenhouse’ effect are identified and are explained by considering the changes in the convective regime associated with the Hadley circulation over the seasonal cycle, and with the Walker circulation over the interannual time-scale. The sensitivity of clear-sky OLR to changes in relative humidity diminishes with increasing relative humidity. This is explained by the increasing saturation of the water-vapour absorption bands with increased moisture. By allowing the relative humidity to vary in specified vertical slabs of the troposphere over an interannual time-scale it is shown that changes in humidity in the mid troposphere (400 to 700 hPa) are of most importance in explaining clear-sky OLR variations. Relative humidity variations do not appear to affect the positive thermodynamic water-vapour feedback significantly in response to surface temperature changes.
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
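The Constructed Analogue method highlighted above can be sketched in a few lines: find least-squares weights that combine historical states to reproduce the current anomaly field, then apply the same weights to those states' evolution some lead time later. This is a generic illustration on synthetic data, not the study's implementation or its model output.

```python
import numpy as np

def constructed_analogue(library, current, lead):
    """Constructed-analogue forecast: solve for weights w such that a
    linear combination of past states matches the current anomaly field,
    then apply the same weights to those states `lead` steps later.

    library: (time, space) array of historical anomaly fields
    current: (space,) anomaly field to be forecast forward
    """
    X = library[:-lead].T                       # (space, n_states)
    w, *_ = np.linalg.lstsq(X, current, rcond=None)
    return library[lead:].T @ w                 # same combination, later

# Synthetic demo (assumed data): a propagating, slowly decaying pattern
# whose step-ahead evolution is the same linear map at every time, so
# the constructed analogue recovers the future state essentially exactly.
t = np.arange(200)[:, None]
x = np.arange(30)[None, :]
field = np.exp(-0.001 * t) * np.cos(0.1 * t + 0.3 * x)
forecast = constructed_analogue(field[:100], field[100], lead=5)
```

Real SST fields are not exactly linear in this sense, which is why the method's skill varies by region and lead time as described above.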
Abstract:
Satellite data are used to quantify and examine the bias in the outgoing long-wave (LW) radiation over North Africa during May–July simulated by a range of climate models and the Met Office global numerical weather prediction (NWP) model. Simulations from an ensemble-mean of multiple climate models overestimate outgoing clear-sky long-wave radiation (LWc) by more than 20 W m−2 relative to observations from Clouds and the Earth's Radiant Energy System (CERES) for May–July 2000 over parts of the west Sahara, and by 9 W m−2 for the North Africa region (20°W–30°E, 10–40°N). Experiments with the atmosphere-only version of the High-resolution Hadley Centre Global Environment Model (HiGEM), suggest that including mineral dust radiative effects removes this bias. Furthermore, only by reducing surface temperature and emissivity by unrealistic amounts is it possible to explain the magnitude of the bias. Comparing simulations from the Met Office NWP model with satellite observations from Geostationary Earth Radiation Budget (GERB) instruments suggests that the model overestimates the LW by 20–40 W m−2 during North African summer. The bias declines over the period 2003–2008, although this is likely to relate to improvements in the model and inhomogeneity in the satellite time series. The bias in LWc coincides with high aerosol dust loading estimated from the Ozone Monitoring Instrument (OMI), including during the GERBILS field campaign (18–28 June 2007) where model overestimates in LWc greater than 20 W m−2 and OMI-estimated aerosol optical depth (AOD) greater than 0.8 are concurrent around 20°N, 0–20°W. A model-minus-GERB LW bias of around 30 W m−2 coincides with high AOD during the period 18–21 June 2007, although differences in cloud cover also impact the model–GERB differences. Copyright © Royal Meteorological Society and Crown Copyright, 2010
Abstract:
The applicability of the BET model for calculation of the surface area of activated carbons is checked by using molecular simulations. By calculating geometric surface areas for a simple model carbon slit-like pore of increasing width, and comparing the obtained values with those for the same systems from the VEGA ZZ package (adsorbate-accessible molecular surface), it is shown that the latter method provides correct values. For the system where a monolayer inside a pore is created, the ASA approach (GCMC, Ar, T = 87 K) underestimates the value of surface area for micropores (especially where only one layer is observed and/or two layers of adsorbed Ar are formed). Therefore, we propose a modification of this method based on searching for the relationship between the pore diameter and the number of layers in a pore. Finally, BET, original and modified ASA, and A-, B- and C-point surface areas are calculated for a series of virtual porous carbons using simulated Ar adsorption isotherms (GCMC and T = 87 K). The comparison of results shows that the BET method underestimates, rather than overestimates (as usually postulated), the surface areas of microporous carbons.
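The BET surface area referred to above is obtained from the standard linearized BET fit. The sketch below is a generic textbook implementation, not the authors' code: the isotherm is synthetic, and the Ar cross-sectional area (0.142 nm²) and the 0.05–0.30 relative-pressure fitting window are conventional assumed values.

```python
import numpy as np

N_A = 6.022e23          # molecules per mol
SIGMA_AR = 0.142e-18    # m^2, assumed Ar cross-sectional area at 87 K
V_STP = 22414.0         # cm^3 STP of gas per mol

def bet_surface_area(p_rel, v_ads):
    """BET area (m^2/g) from relative pressures p/p0 and adsorbed
    volumes v (cm^3 STP/g).  Fits the linearized BET form
    p/(v(p0-p)) = 1/(vm c) + ((c-1)/(vm c)) (p/p0)
    over the conventional window 0.05 <= p/p0 <= 0.30."""
    mask = (p_rel >= 0.05) & (p_rel <= 0.30)
    x = p_rel[mask]
    y = x / (v_ads[mask] * (1.0 - x))
    slope, intercept = np.polyfit(x, y, 1)
    vm = 1.0 / (slope + intercept)       # monolayer capacity, cm^3 STP/g
    return vm * N_A * SIGMA_AR / V_STP

# Synthetic BET isotherm with vm = 100 cm^3/g and c = 80 (illustrative).
p = np.linspace(0.01, 0.35, 50)
vm_true, c = 100.0, 80.0
v = vm_true * c * p / ((1.0 - p) * (1.0 + (c - 1.0) * p))
area = bet_surface_area(p, v)
```

The paper's point is precisely that this fitted area can fall below the true geometric area in micropores, where the monolayer picture underlying the fit breaks down.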
Abstract:
Background: Thiol isomerases are a family of endoplasmic reticulum enzymes which orchestrate redox-based modifications of protein disulphide bonds. Previous studies have identified important roles for the thiol isomerases PDI and ERp5 in the regulation of normal platelet function. Objectives: Recently, we demonstrated the presence of a further five thiol isomerases at the platelet surface. In this report we characterize the role of one of these enzymes, ERp57, in the regulation of platelet function. Methods/Results: Using enzyme activity function blocking antibodies, we demonstrate a role for ERp57 in platelet aggregation, dense granule secretion, fibrinogen binding, calcium mobilisation and thrombus formation under arterial conditions. In addition to the effects of ERp57 on isolated platelets, we observe the presence of ERp57 in the developing thrombus in vivo. Furthermore, the inhibition of ERp57 function was found to reduce laser-injury induced arterial thrombus formation in a murine model of thrombosis. Conclusions: These data suggest that ERp57 is important for normal platelet function and open up the possibility that the regulation of platelet function by a range of cell surface thiol isomerases may represent a broad paradigm for the regulation of haemostasis and thrombosis.
Abstract:
A statistical methodology is proposed and tested for the analysis of extreme values of atmospheric wave activity at mid-latitudes. The adopted methods are the classical block-maximum and peak-over-threshold approaches, respectively based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD). Time-series of the ‘Wave Activity Index’ (WAI) and the ‘Baroclinic Activity Index’ (BAI) are computed from simulations of the General Circulation Model ECHAM4.6, which is run under perpetual January conditions. Both the GEV and the GPD analyses indicate that the extremes of WAI and BAI are Weibull distributed, which corresponds to distributions with an upper bound. However, a remarkably large variability is found in the tails of such distributions; distinct simulations carried out under the same experimental setup provide appreciably different estimates of the 200-yr WAI return level. The consequences of this phenomenon in applications of the methodology to climate change studies are discussed. The atmospheric configurations characteristic of the maxima and minima of WAI and BAI are also examined.
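The block-maximum analysis above can be sketched with a standard GEV fit. This is a generic illustration on synthetic data (the shape, location and scale values are assumed, not the ECHAM4.6 results); note that in SciPy's parameterization a positive shape parameter corresponds to the bounded, Weibull-type tail found in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic block maxima of a bounded index (stand-in for WAI): the true
# distribution is Weibull-type, i.e. it has a finite upper endpoint.
sample = stats.genextreme.rvs(c=0.3, loc=10.0, scale=2.0, size=2000,
                              random_state=rng)

# Fit the GEV; in SciPy's convention c > 0 is the Weibull (bounded) case.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)

# 200-yr return level: the quantile exceeded on average once per 200 blocks.
rl200 = stats.genextreme.ppf(1.0 - 1.0 / 200.0, c_hat, loc_hat, scale_hat)
upper_bound = loc_hat + scale_hat / c_hat   # finite endpoint when c > 0
```

The large tail variability reported above corresponds to the sampling spread of `c_hat` and hence of `rl200` between independent realizations of the same experiment.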
Abstract:
In this paper, we extend to the time-harmonic Maxwell equations the p-version analysis technique developed in [R. Hiptmair, A. Moiola and I. Perugia, Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the p-version, SIAM J. Numer. Anal., 49 (2011), 264-284] for Trefftz-discontinuous Galerkin approximations of the Helmholtz problem. While error estimates in a mesh-skeleton norm are derived parallel to the Helmholtz case, the derivation of estimates in a mesh-independent norm requires new twists in the duality argument. The particular case where the local Trefftz approximation spaces are built of vector-valued plane wave functions is considered, and convergence rates are derived.
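For orientation, the vector-valued plane waves used as local Trefftz basis functions are, in a standard formulation (sketched here, not quoted from the paper), fields of the form

```latex
\mathbf{E}(\mathbf{x}) = \mathbf{a}\, e^{\mathrm{i}k\,\mathbf{d}\cdot\mathbf{x}},
\qquad |\mathbf{d}| = 1, \quad \mathbf{a}\cdot\mathbf{d} = 0,
```

each of which satisfies the time-harmonic Maxwell equation exactly, since

```latex
\nabla\times(\nabla\times\mathbf{E}) - k^2\mathbf{E}
 = (\mathrm{i}k)^2\, \mathbf{d}\times(\mathbf{d}\times\mathbf{a})\,
   e^{\mathrm{i}k\,\mathbf{d}\cdot\mathbf{x}} - k^2\mathbf{E}
 = k^2\mathbf{a}\, e^{\mathrm{i}k\,\mathbf{d}\cdot\mathbf{x}} - k^2\mathbf{E} = 0,
```

using \(\mathbf{d}\times(\mathbf{d}\times\mathbf{a}) = \mathbf{d}(\mathbf{d}\cdot\mathbf{a}) - \mathbf{a}|\mathbf{d}|^2 = -\mathbf{a}\). Trefftz methods exploit this exactness: every discrete function is already a local solution, so only interface conditions need to be enforced.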
Abstract:
In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.
Abstract:
A specific traditional plate count method and real-time PCR systems based on SYBR Green I and TaqMan technologies, using a specific primer pair and probe for amplification of the iap-gene, were used for quantitative assay of Listeria monocytogenes in seven decimal serial dilution series of nutrient broth and milk samples containing 1.58 to 1.58×107 cfu/ml, and the real-time PCR methods were compared with the plate count method with respect to accuracy and sensitivity. In this study, the plate count method was performed using surface-plating of 0.1 ml of each sample on Palcam Agar. The lowest detectable level for this method was 1.58×10 cfu/ml for both nutrient broth and milk samples. Using purified DNA as a template for generation of standard curves, as few as four copies of the iap-gene could be detected per reaction with both real-time PCR assays, indicating that they were highly sensitive. When these real-time PCR assays were applied to quantification of L. monocytogenes in decimal serial dilution series of nutrient broth and milk samples, 3.16×10 to 3.16×105 copies per reaction (equal to 1.58×103 to 1.58×107 cfu/ml L. monocytogenes) were detectable. Expressed as logarithmic cycles, the quantitative results of the detectable steps for the plate count method and both molecular assays were similar to the inoculation levels.
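Quantification from a real-time PCR standard curve, as used above, rests on the linear relation between Ct and log10 of the template copy number. The sketch below is a generic illustration with assumed Ct values consistent with near-100% amplification efficiency; it is not the study's data.

```python
import numpy as np

# Standard curve: Ct values for a tenfold dilution series (illustrative
# numbers; a slope near -3.32 corresponds to ~100% efficiency).
copies = np.array([4.0, 4e1, 4e2, 4e3, 4e4, 4e5])
ct = np.array([35.1, 31.8, 28.4, 25.1, 21.8, 18.4])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # 1.0 means 100% per cycle

def quantify(ct_unknown):
    """Copies per reaction for a measured Ct, from the standard curve."""
    return 10.0 ** ((ct_unknown - intercept) / slope)
```

Inverting the curve with `quantify` is what maps a sample's Ct back to copies per reaction, and hence (via the dilution scheme) to cfu/ml.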
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region over the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM.
DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and methods to combine different predictions.
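The trend-with-AR1 benchmark described above decomposes into two fitted pieces: a regression of SST on equivalent CO2 concentration for the forced trend, and an AR(1) model on the residual for internal variability. A minimal sketch on synthetic data (the CO2 path, regression coefficients and AR(1) parameter are assumed illustrative values, not HadISST results):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual SST series: linear response to ln(CO2) plus AR(1)
# internal variability, mirroring the benchmark's two components.
years = np.arange(1900, 2006)
co2 = 280.0 * np.exp(0.004 * (years - 1900))   # assumed CO2 path, ppm
noise = np.zeros(years.size)
for t in range(1, years.size):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 0.1)
sst = 10.0 + 2.0 * np.log(co2 / 280.0) + noise

# (1) Forced trend: regress SST on the CO2 time series (here ln CO2).
beta, alpha = np.polyfit(np.log(co2), sst, 1)
resid = sst - (alpha + beta * np.log(co2))

# (2) Internal variability: AR(1) coefficient from the lag-1 correlation.
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]

def forecast(lead, co2_future):
    """Trend-plus-AR1 prediction `lead` years after the last observation:
    the regressed trend plus the last residual damped by phi**lead."""
    return alpha + beta * np.log(co2_future) + (phi ** lead) * resid[-1]
```

Because the AR(1) memory decays as `phi**lead`, this benchmark's skill at longer leads comes almost entirely from the forced trend, consistent with the verification results above.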
Abstract:
The task of this paper is to develop a Time-Domain Probe Method for the reconstruction of impenetrable scatterers. The basic idea of the method is to use pulses in the time domain and the time-dependent response of the scatterer to reconstruct its location and shape. The method is based on the basic causality principle of time-dependent scattering. The method is independent of the boundary condition and is applicable for limited aperture scattering data. In particular, we discuss the reconstruction of the shape of a rough surface in three dimensions from time-domain measurements of the scattered field. In practice, measurement data are collected with the incident field given by a pulse. We formulate the time-domain field reconstruction problem equivalently via frequency-domain integral equations or via a retarded boundary integral equation based on results of Bamberger, Ha-Duong, Lubich. In contrast to pure frequency domain methods, here we use a time-domain characterization of the unknown shape for its reconstruction. Our paper describes the Time-Domain Probe Method and relates it to previous frequency-domain approaches on sampling and probe methods by Colton, Kirsch, Ikehata, Potthast, Luke, Sylvester et al. The approach significantly extends recent work of Chandler-Wilde and Lines (2005) and Luke and Potthast (2006) on the time-domain point source method. We provide a complete convergence analysis for the method for the rough surface scattering case and provide numerical simulations and examples.