923 results for Nekrassov–Mehmke 2 method – (NM2)
Abstract:
A relatively simple, selective, precise and accurate high-performance liquid chromatography (HPLC) method, based on a reaction of phenylisothiocyanate (PITC) with glucosamine (GL) in alkaline media, was developed and validated to determine glucosamine hydrochloride permeating through human skin in vitro. It is usually problematic to develop an accurate assay for chemicals traversing skin because the excellent barrier properties of the tissue ensure that only low amounts of the material pass through the membrane, and skin components may leach out of the tissue to interfere with the analysis. In addition, in the case of glucosamine hydrochloride, chemical instability adds further complexity to assay development. The assay, utilising the PITC-GL reaction, was refined by optimizing the reaction temperature, reaction time and PITC concentration. The reaction produces a phenylthiocarbamyl-glucosamine (PTC-GL) adduct, which was separated on a reverse-phase (RP) column packed with 5 μm ODS (C-18) Hypersil particles and detected with a diode array detector (DAD) at 245 nm. The mobile phase was methanol-water-glacial acetic acid (10:89.96:0.04 v/v/v, pH 3.5) delivered to the column at 1 ml min⁻¹, and the column temperature was maintained at 30 °C. Using a saturated aqueous solution of glucosamine hydrochloride, in vitro permeation studies were performed at 32 ± 1 °C over 48 h using human epidermal membranes prepared by a heat separation method and mounted in Franz-type diffusion cells with a diffusional area of 2.15 ± 0.1 cm². The optimum derivatisation conditions for reaction temperature, reaction time and PITC concentration were found to be 80 °C, 30 min and 1% v/v, respectively. The PTC-Gal and PTC-GL adducts eluted at 8.9 and 9.7 min, respectively. The detector response was linear in the concentration range 0-1000 μg ml⁻¹. The assay was robust, with intra- and inter-day precisions (expressed as a percentage relative standard deviation, %R.S.D.) < 12. Intra- and inter-day accuracy (expressed as a percentage relative error, %RE) was ≤ -5.60 and ≤ -8.00, respectively. Using this assay, it was found that GL-HCl permeates through human skin with a flux of 1.497 ± 0.42 μg cm⁻² h⁻¹, a permeability coefficient of (5.66 ± 1.6) × 10⁻⁶ cm h⁻¹ and a lag time of 10.9 ± 4.6 h. (c) 2005 Elsevier B.V. All rights reserved.
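As a quick cross-check of the transport parameters quoted above, the sketch below applies the standard steady-state membrane relations, Kp = Jss/Cv and D = h²/(6·tlag), to the reported flux and lag time. The saturated donor concentration (~265 mg ml⁻¹) and the diffusional path length are illustrative assumptions, not values reported in the study.

```python
# Hedged sketch: steady-state Franz-cell relations applied to the reported values.
# Assumed inputs (not from the abstract): saturated donor concentration and path length.

J_ss = 1.497          # flux, ug cm^-2 h^-1 (reported)
t_lag = 10.9          # lag time, h (reported)
C_v = 265_000.0       # assumed saturated donor concentration, ug cm^-3 (~265 mg ml^-1)
h = 20e-4             # assumed diffusional path length through the epidermis, cm (~20 um)

K_p = J_ss / C_v                  # permeability coefficient, cm h^-1
D = h ** 2 / (6.0 * t_lag)        # apparent diffusion coefficient, cm^2 h^-1

print(f"K_p ~ {K_p:.2e} cm/h (reported: 5.66e-6 cm/h)")
print(f"D   ~ {D:.2e} cm^2/h for the assumed path length")
```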
Abstract:
Background and purpose: Low-efficacy partial agonists at the D2 dopamine receptor may be useful for treating schizophrenia. In this report we describe a method for assessing the efficacy of these compounds based on stimulation of [³⁵S]GTPγS binding. Experimental approach: Agonist efficacy was assessed from [³⁵S]GTPγS binding to membranes of CHO cells expressing D2 dopamine receptors in buffers with and without Na+. Effects of Na+ on receptor/G protein coupling were assessed using agonist/[³H]spiperone competition binding assays. Key results: When [³⁵S]GTPγS binding assays were performed in buffers containing Na+, some agonists (aripiprazole, AJ-76, UH-232) exhibited very low efficacy whereas other agonists exhibited measurable efficacy. When Na+ was substituted by N-methyl-D-glucamine, the efficacy of all agonists increased (relative to that of dopamine), particularly for aripiprazole, aplindore, AJ-76, (-)-3-PPP and UH-232. In ligand binding assays, substitution of Na+ by N-methyl-D-glucamine increased receptor/G protein coupling for some agonists (aplindore, dopamine and (-)-3-PPP), but for aripiprazole, AJ-76 and UH-232 there was little effect on receptor/G protein coupling. Conclusions and implications: Substitution of Na+ by NMDG increases sensitivity in [³⁵S]GTPγS binding assays so that very low efficacy agonists can be detected clearly. For some agonists the effect seems to be mediated via enhanced receptor/G protein coupling, whereas for others it is mediated at another point in the G protein activation cycle. AJ-76, aripiprazole and UH-232 seem particularly sensitive to this change in assay conditions. This work provides a new method to discover these very low efficacy agonists.
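Efficacy in such assays is typically expressed relative to a full agonist such as dopamine. The sketch below shows one generic way this could be quantified, by fitting a Hill equation to concentration-response data and comparing the fitted maximal stimulations; the function, the synthetic data and the use of scipy.optimize.curve_fit are illustrative assumptions, not the analysis used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic sketch: agonist efficacy relative to dopamine from [35S]GTPgammaS
# concentration-response data, assuming a Hill-equation fit (illustrative only;
# the data below are synthetic).

def hill(conc, basal, emax, ec50, n):
    return basal + (emax - basal) * conc**n / (ec50**n + conc**n)

def fitted_span(conc, response):
    """Fitted maximal stimulation above basal."""
    p0 = [response.min(), response.max(), np.median(conc), 1.0]
    popt, _ = curve_fit(hill, conc, response, p0=p0, maxfev=10_000)
    return popt[1] - popt[0]

conc = np.logspace(-10, -5, 8)                    # mol/L, made-up assay concentrations
dopamine  = hill(conc, 100.0, 400.0, 3e-8, 1.0)   # synthetic full-agonist response
test_drug = hill(conc, 100.0, 160.0, 1e-8, 1.0)   # synthetic low-efficacy partial agonist

rel_eff = fitted_span(conc, test_drug) / fitted_span(conc, dopamine)
print(f"efficacy relative to dopamine ~ {rel_eff:.2f}")
```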
Abstract:
Background: Postnatal depression is associated with adverse child cognitive and socio-emotional outcomes. It is not known whether psychological treatment affects the quality of the mother-child relationship and child outcome. Aims: To evaluate the effect of three psychological treatments on the mother-child relationship and child outcome. Method: Women with post-partum depression (n=193) were assigned randomly to routine primary care, non-directive counselling, cognitive-behavioural therapy or psychodynamic therapy. The women and their children were assessed at 4.5, 18 and 60 months post-partum. Results: Indications of a positive benefit were limited. All three treatments had a significant benefit on maternal reports of early difficulties in relationships with the infants; counselling gave better infant emotional and behaviour ratings at 18 months and more sensitive early mother-infant interactions. The treatments had no significant impact on maternal management of early infant behaviour problems, security of infant-mother attachment, infant cognitive development or any child outcome at 5 years. Conclusions: Early intervention was of short-term benefit to the mother-child relationship and infant behaviour problems. More prolonged intervention may be needed. Health visitors could deliver this.
Abstract:
Binocular disparity, blur, and proximal cues drive convergence and accommodation. Disparity is considered to be the main vergence cue and blur the main accommodation cue. We have developed a remote haploscopic photorefractor to measure simultaneous vergence and accommodation objectively in a wide range of participants of all ages while fixating targets at between 0.3 and 2 m. By separating the three main near cues, we can explore their relative weighting in three-, two-, one-, and zero-cue conditions. Disparity can be manipulated by remote occlusion; blur cues by using either a Gabor patch or a detailed picture target; and looming cues by either scaling or not scaling target size with distance. In normal orthophoric, emmetropic, symptom-free, naive visually mature participants, disparity was by far the most significant cue to both vergence and accommodation. Accommodation responses dropped dramatically if disparity was not available. Blur only had a clinically significant effect when disparity was absent. Proximity had very little effect. There was considerable interparticipant variation. We predict that the relative weighting of near cue use is likely to vary between clinical groups and present some individual cases as examples. We are using this naturalistic tool to research strabismus, vergence and accommodation development, and emmetropization.
Abstract:
We present a stochastic approach for solving the quantum-kinetic equation introduced in Part I. A Monte Carlo method based on backward time evolution of the numerical trajectories is developed. The computational complexity and the stochastic error are investigated numerically. Variance reduction techniques are applied, which demonstrate a clear advantage with respect to the approaches based on symmetry transformation. Parallel implementation is realized on a GRID infrastructure.
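The abstract does not reproduce the kernel of the quantum-kinetic equation from Part I, so the sketch below only illustrates the general backward-trajectory idea on a toy Volterra-type equation f(t) = g(t) + ∫₀ᵗ K(t,s) f(s) ds: trajectories are generated backward in time from the observation point, and the survival-probability weighting keeps the estimator unbiased. The kernel, termination probability and exact solution used here are illustrative assumptions, not the model of the paper.

```python
import random
import math

# Hedged sketch: backward-trajectory Monte Carlo for a toy Volterra equation
#   f(t) = g(t) + int_0^t K(t, s) f(s) ds,
# with K(t, s) = LAM and g(t) = 1, whose exact solution is f(t) = exp(LAM * t).

LAM = 0.8      # illustrative constant kernel
P_STOP = 0.3   # termination probability of a backward trajectory

def g(t: float) -> float:
    return 1.0

def K(t: float, s: float) -> float:
    return LAM

def one_trajectory(t: float) -> float:
    """Single backward trajectory started at time t; returns an unbiased score."""
    score, weight = 0.0, 1.0
    while True:
        score += weight * g(t)
        if t <= 0.0 or random.random() < P_STOP:
            return score
        s = random.uniform(0.0, t)                 # jump backward to an earlier time
        weight *= t * K(t, s) / (1.0 - P_STOP)     # importance/survival weighting
        t = s

def estimate(t: float, n: int = 200_000) -> float:
    return sum(one_trajectory(t) for _ in range(n)) / n

if __name__ == "__main__":
    t = 1.5
    print("Monte Carlo:", round(estimate(t), 3), " exact:", round(math.exp(LAM * t), 3))
```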
Abstract:
Assimilation of physical variables into coupled physical/biogeochemical models poses considerable difficulties. One problem is that data assimilation can break relationships between physical and biological variables. As a consequence, biological tracers, especially nutrients, are incorrectly displaced in the vertical, resulting in unrealistic biogeochemical fields. To prevent this, we present the idea of applying an increment to the nutrient field within a data assimilating model to ensure that nutrient-potential density relationships are maintained within a water column during assimilation. After correcting the nutrients, it is assumed that other biological variables rapidly adjust to the corrected nutrient fields. We applied this method to a 17 year run of the 2° NEMO ocean-ice model coupled to the PlankTOM5 ecosystem model. Results were compared with a control with no assimilation, and with a model with physical assimilation but no nutrient increment. In the nutrient incrementing experiment, phosphate distributions were improved both at high latitudes and at the equator. At midlatitudes, assimilation generated unrealistic advective upwelling of nutrients within the boundary currents, which spread into the subtropical gyres resulting in more biased nutrient fields. This result was largely unaffected by the nutrient increment and is probably due to boundary currents being poorly resolved in a 2° model. Changes to nutrient distributions fed through into other biological parameters altering primary production, air-sea CO2 flux, and chlorophyll distributions. These secondary changes were most pronounced in the subtropical gyres and at the equator, which are more nutrient limited than high latitudes.
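The abstract describes the nutrient increment only conceptually, so the following is a minimal single-column sketch of one way it could be implemented: the pre-assimilation nutrient-potential density relationship N(σ) is tabulated, and after the physics assimilation has changed the density profile, nutrients are re-read from that relationship so that N(σ) is preserved within the column. The toy profiles, variable names and use of simple linear interpolation are illustrative assumptions, not the NEMO/PlankTOM5 code.

```python
import numpy as np

# Hedged sketch: preserve the nutrient / potential-density relationship in a water
# column when data assimilation changes the density profile (illustrative only).

def nutrient_increment(sigma_before, nutrient_before, sigma_after):
    """Return the increment implied by keeping N(sigma) fixed within the column.

    sigma_before, nutrient_before : pre-assimilation potential density and nutrient
    sigma_after                   : post-assimilation potential density
    """
    # Build a monotonic N(sigma) lookup from the background state.
    order = np.argsort(sigma_before)
    sigma_table, nutrient_table = np.sort(sigma_before), nutrient_before[order]
    # Re-read nutrients at the new densities; np.interp clamps outside the range,
    # which simply retains the end-member nutrient values.
    nutrient_after = np.interp(sigma_after, sigma_table, nutrient_table)
    return nutrient_after - nutrient_before

# Toy column: assimilation has displaced the pycnocline upward by one level.
sigma_b = np.array([24.0, 25.0, 26.0, 27.0, 27.5])
no3_b   = np.array([ 0.1,  1.0,  8.0, 20.0, 25.0])   # mmol m-3, increases with density
sigma_a = np.array([24.0, 26.0, 27.0, 27.3, 27.5])

print(no3_b + nutrient_increment(sigma_b, no3_b, sigma_a))
```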
Abstract:
The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into of order 10 quadrature points per major gas and performing a monochromatic radiation calculation for each point. In this presentation it is shown that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K/day due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide, and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K/day can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K/day for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
Abstract:
A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer, where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations, but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
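The abstract states the ingredients of the method (variance of successive mean Doppler velocities, a noise correction from the signal-to-noise ratio, and the horizontal wind speed) without reproducing the formula, so the sketch below uses the standard inertial-subrange relation on which such retrievals are based: the air advected through the beam during N samples defines a length scale L, and ε = 2π (2/(3a))^(3/2) [σ²/(L^(2/3) − L₁^(2/3))]^(3/2), with a ≈ 0.52 the one-dimensional Kolmogorov constant. The numbers and variable names are illustrative; consult the paper for the exact formulation and noise treatment.

```python
import numpy as np

# Hedged sketch of an inertial-subrange dissipation-rate estimate from the variance
# of N successive mean Doppler velocities (illustrative; not the paper's exact code).

A_KOLMOGOROV = 0.52   # assumed one-dimensional Kolmogorov constant

def dissipation_rate(w_samples, dt, wind_speed, noise_variance=0.0):
    """Estimate eps (m2 s-3) from vertical-velocity samples taken every dt seconds."""
    n = len(w_samples)
    var = np.var(w_samples, ddof=1) - noise_variance      # remove instrumental noise
    if var <= 0.0:
        return 0.0                                         # retrieval not possible
    length_big = wind_speed * n * dt                       # scale sampled by N profiles
    length_one = wind_speed * dt                           # scale of a single profile
    factor = 2.0 * np.pi * (2.0 / (3.0 * A_KOLMOGOROV)) ** 1.5
    return factor * (var / (length_big ** (2 / 3) - length_one ** (2 / 3))) ** 1.5

# Example: ten 4-s profiles in a 5 m/s wind with ~0.4 m2/s2 observed variance.
w = np.random.default_rng(0).normal(0.0, np.sqrt(0.4), size=10)
print(f"eps ~ {dissipation_rate(w, dt=4.0, wind_speed=5.0, noise_variance=0.05):.1e} m2 s-3")
```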
Abstract:
The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models, and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into of order 10 quadrature points per major gas, and performing a pseudo-monochromatic radiation calculation for each point. In this paper it is first argued that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer pseudo-monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K d−1 due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K d−1 can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K d−1 for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
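As a toy illustration of the reordering step on which both the CKD and FSCK approaches rest, the sketch below builds a synthetic absorption spectrum, sorts the absorption coefficients to form the cumulative distribution, and approximates the spectrally averaged transmittance with a handful of quadrature points. The spectrum, absorber amounts and number of points are arbitrary choices for demonstration, not the optimized FSCK quadrature described above.

```python
import numpy as np

# Toy k-distribution demo: reorder a synthetic absorption spectrum and approximate
# the band-mean transmittance with a few quadrature points (illustrative only).

rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 20_000)                      # arbitrary spectral coordinate
# Synthetic "spectrum": smooth background plus pseudo-random absorption lines.
k = 0.1 + sum(a * np.exp(-((nu - c) / w) ** 2)
              for a, c, w in zip(rng.uniform(1, 50, 300),
                                 rng.uniform(0, 1, 300),
                                 rng.uniform(1e-4, 1e-3, 300)))

def transmittance_lbl(u):
    """Line-by-line spectral mean of exp(-k u) for absorber amount u."""
    return np.mean(np.exp(-k * u))

def transmittance_kdist(u, n_points=8):
    """Same quantity from the reordered spectrum with n_points quadrature intervals."""
    k_sorted = np.sort(k)                                # reordering = building g(k)
    edges = np.linspace(0, k_sorted.size, n_points + 1, dtype=int)
    t = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        t += (hi - lo) / k_sorted.size * np.exp(-k_sorted[lo:hi].mean() * u)
    return t

for u in (0.01, 0.1, 1.0):
    print(u, round(transmittance_lbl(u), 4), round(transmittance_kdist(u), 4))
```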
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer, which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. An error of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
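The over-prediction mechanism described above (negative tracer values produced by the interpolating advection scheme being reset to zero) can be shown with a few lines of arithmetic: clipping adds mass unless it is compensated. The sketch below illustrates the effect on a toy field together with one simple remedy, a proportional mass fixer; this is a generic illustration, not the flux-corrected transport scheme actually added to the UM.

```python
import numpy as np

# Illustration: resetting negative tracer values to zero adds mass unless compensated.
# A proportional fixer is one simple (if crude) way to restore conservation; the UM
# study used a flux-corrected transport method instead.

tracer = np.array([0.0, -0.4, 2.0, 5.0, 1.5, -0.1, 0.0])   # toy post-advection field
mass_before = tracer.sum()

clipped = np.clip(tracer, 0.0, None)
print("mass added by clipping:", clipped.sum() - mass_before)   # +0.5 here

# Proportional fixer: rescale the clipped field so total mass matches the original.
fixed = clipped * (mass_before / clipped.sum())
print("mass error after fixing:", fixed.sum() - mass_before)    # ~0 (floating point)
```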
Abstract:
A Kriging interpolation method is combined with an object-based evaluation measure to assess the ability of the UK Met Office's dispersion and weather prediction models to predict the evolution of a plume of tracer as it was transported across Europe. The object-based evaluation method, SAL, considers aspects of the Structure, Amplitude and Location of the pollutant field. The SAL method is able to quantify errors in the predicted size and shape of the pollutant plume, through the structure component, the over- or under-prediction of the pollutant concentrations, through the amplitude component, and the position of the pollutant plume, through the location component. The quantitative results of the SAL evaluation are similar for both models and close to a subjective visual inspection of the predictions. A negative structure component for both models, throughout the entire 60 hour plume dispersion simulation, indicates that the modelled plumes are too small and/or too peaked compared to the observed plume at all times. The amplitude component for both models is strongly positive at the start of the simulation, indicating that surface concentrations are over-predicted by both models for the first 24 hours, but modelled concentrations are within a factor of 2 of the observations at later times. Finally, for both models, the location component is small for the first 48 hours after the start of the tracer release, indicating that the modelled plumes are situated close to the observed plume early on in the simulation, but this plume location error grows at later times. The SAL methodology has also been used to identify differences in the transport of pollution in the dispersion and weather prediction models. The convection scheme in the weather prediction model is found to transport more pollution vertically out of the boundary layer into the free troposphere than the dispersion model convection scheme resulting in lower pollutant concentrations near the surface and hence a better forecast for this case study.
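For reference, the three SAL components have compact definitions: the amplitude component compares domain-mean values, the location component compares centres of mass, and the structure component compares scaled "volumes" of the identified objects. The sketch below implements a simplified version that treats each field as a single object, so it omits the object identification and the second location term of the full SAL formulation; it is an illustrative approximation, not the implementation evaluated in the paper.

```python
import numpy as np

# Simplified SAL components (single-object approximation; the full method also
# identifies individual objects and adds a second location term).

def sal(model, obs, dx=1.0):
    """Return (S, A, L) for two non-negative 2-D fields on the same grid."""
    def scaled_volume(field):
        return field.sum() / field.max()            # single-object stand-in for V

    def centre_of_mass(field):
        idx = np.indices(field.shape)
        return np.array([(i * field).sum() / field.sum() for i in idx])

    A = (model.mean() - obs.mean()) / (0.5 * (model.mean() + obs.mean()))
    S = (scaled_volume(model) - scaled_volume(obs)) / \
        (0.5 * (scaled_volume(model) + scaled_volume(obs)))
    d_max = np.hypot(*model.shape) * dx              # largest distance in the domain
    L = np.linalg.norm(centre_of_mass(model) - centre_of_mass(obs)) * dx / d_max
    return S, A, L

# Toy example: the modelled plume is smaller, more peaked and displaced.
y, x = np.mgrid[0:50, 0:50]
obs   = np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200.0)
model = 1.5 * np.exp(-((x - 30) ** 2 + (y - 20) ** 2) / 60.0)
print("S, A, L = %.2f, %.2f, %.2f" % sal(model, obs))
```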
Abstract:
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5°-resolution range from approximately 50% at 1 mm h−1 to 20% at 14 mm h−1. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%–80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day−1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%–35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%–15% at 5 mm day−1, with proportionate reductions in latent heating sampling errors.
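The compositing step at the core of such a Bayesian retrieval can be written compactly: each database profile is weighted by exp(-0.5 (y - y_i)ᵀ C⁻¹ (y - y_i)), where y is the observed brightness-temperature vector, y_i the simulated one and C the combined observation and model error covariance, and the retrieved quantities are the weighted means. The sketch below is a generic illustration of this step with made-up numbers; the operational algorithm adds the convective/nonconvective partitioning, geographic database and rain-free screening described above.

```python
import numpy as np

# Generic Bayesian compositing step for a radiometer rain-rate retrieval (illustrative).

def bayesian_retrieve(tb_obs, tb_db, rain_db, err_cov):
    """Weighted-mean rain rate given observed and database brightness temperatures."""
    cinv = np.linalg.inv(err_cov)
    diff = tb_db - tb_obs                               # (n_profiles, n_channels)
    chi2 = np.einsum('ij,jk,ik->i', diff, cinv, diff)   # Mahalanobis distances
    w = np.exp(-0.5 * (chi2 - chi2.min()))              # shift for numerical stability
    return np.sum(w * rain_db) / np.sum(w)

# Made-up 3-channel example with a 5-profile "database".
tb_obs  = np.array([260.0, 245.0, 230.0])
tb_db   = np.array([[255, 240, 228], [262, 247, 231], [270, 255, 240],
                    [250, 235, 225], [261, 246, 229]], dtype=float)
rain_db = np.array([2.0, 5.0, 12.0, 0.5, 4.0])          # mm h-1
err_cov = np.diag([4.0, 4.0, 4.0]) ** 2                 # assumed 4-K channel errors

print(f"retrieved rain rate ~ {bayesian_retrieve(tb_obs, tb_db, rain_db, err_cov):.1f} mm/h")
```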
Abstract:
The night-time atmospheric chemistry of the biogenic volatile organic compounds (Z)-hex-4-en-1-ol, (Z)-hex-3-en-1-ol ('leaf alcohol'), (E)-hex-3-en-1-ol, (Z)-hex-2-en-1-ol and (E)-hex-2-en-1-ol has been studied at room temperature. Rate coefficients for reactions of the nitrate radical (NO3) with these stress-induced plant emissions were measured using the discharge-flow technique. We employed off-axis continuous-wave cavity-enhanced absorption spectroscopy (CEAS) for the detection of NO3, which enabled us to work with the hexenol compounds in excess over NO3. The rate coefficients determined were (2.93 ± 0.58) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹, (2.67 ± 0.42) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹, (4.43 ± 0.91) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹, (1.56 ± 0.24) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹ and (1.30 ± 0.24) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹ for (Z)-hex-4-en-1-ol, (Z)-hex-3-en-1-ol, (E)-hex-3-en-1-ol, (Z)-hex-2-en-1-ol and (E)-hex-2-en-1-ol, respectively. The rate coefficient for the reaction of NO3 with (Z)-hex-3-en-1-ol agrees with the single published determination made using a relative method. The other rate coefficients have not been measured before and are compared to estimated values. Relative-rate studies were also performed, but required modification of the standard technique because N2O5 (used as the source of NO3) itself reacts with the hexenols. We used varying excesses of NO2 to determine simultaneously the rate coefficients for reactions of NO3 and N2O5 with (E)-hex-3-en-1-ol, obtaining (5.2 ± 1.8) × 10⁻¹³ cm³ molecule⁻¹ s⁻¹ and (3.1 ± 2.3) × 10⁻¹⁸ cm³ molecule⁻¹ s⁻¹, respectively. Our new determinations suggest atmospheric lifetimes with respect to NO3-initiated oxidation of roughly 1-4 h for the hexenols, comparable with the lifetimes estimated for atmospheric degradation by OH and shorter than those for attack by O3. Recent measurements of [N2O5] suggest that the gas-phase reactions of N2O5 with unsaturated alcohols will not be of importance under usual atmospheric conditions, but they certainly can be in laboratory systems when determining rate coefficients.
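The lifetime estimate quoted above follows from first-order loss, τ = 1/(k[NO3]). In the sketch below, the night-time NO3 concentration (5 × 10⁸ molecule cm⁻³, roughly 20 ppt) is a typical illustrative value assumed for the calculation, not one taken from the paper; with that value the measured rate coefficients give lifetimes of about 1-4 h, consistent with the range quoted.

```python
# Lifetime with respect to NO3-initiated oxidation: tau = 1 / (k * [NO3]).
# The NO3 concentration below is an assumed, typical night-time value.

NO3 = 5e8   # molecule cm-3 (~20 ppt), illustrative assumption

rate_coefficients = {          # cm3 molecule-1 s-1, from the measurements above
    "(Z)-hex-4-en-1-ol": 2.93e-13,
    "(Z)-hex-3-en-1-ol": 2.67e-13,
    "(E)-hex-3-en-1-ol": 4.43e-13,
    "(Z)-hex-2-en-1-ol": 1.56e-13,
    "(E)-hex-2-en-1-ol": 1.30e-13,
}

for name, k in rate_coefficients.items():
    tau_hours = 1.0 / (k * NO3) / 3600.0
    print(f"{name}: tau ~ {tau_hours:.1f} h")
```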
Abstract:
Analysis and modeling of X-ray and neutron Bragg and total diffraction data show that the compounds referred to in the literature as “Pd(CN)2” and “Pt(CN)2” are nanocrystalline materials consisting of small sheets of vertex-sharing square-planar M(CN)4 units, layered in a disordered manner with an intersheet separation of 3.44 Å at 300 K. The small size of the crystallites means that the sheets’ edges form a significant fraction of each material. The Pd(CN)2 nanocrystallites studied using total neutron diffraction are terminated by water and the Pt(CN)2 nanocrystallites by ammonia, in place of half of the terminal cyanide groups, thus maintaining charge neutrality. The neutron samples contain sheets of approximate dimensions 30 Å × 30 Å. For sheets of the size we describe, our structural models predict compositions of Pd(CN)2·xH2O and Pt(CN)2·yNH3 (x = y = 0.29). These values are in good agreement with those obtained from total neutron diffraction and thermal analysis, and are also supported by infrared and Raman spectroscopy measurements. It is also possible to prepare related compounds Pd(CN)2·pNH3 and Pt(CN)2·qH2O, in which the terminating groups are exchanged. Additional samples showing sheet sizes in the range 10 Å × 10 Å (y = 0.67) to 80 Å × 80 Å (p = q = 0.12), as determined by X-ray diffraction, have been prepared. The related mixed-metal phase, Pd1/2Pt1/2(CN)2·qH2O (q = 0.50), is also nanocrystalline (sheet size 15 Å × 15 Å). In all cases, the interiors of the sheets are isostructural with those found in Ni(CN)2. Removal of the final traces of water or ammonia by heating results in decomposition of the compounds to Pd and Pt metal or, in the case of the mixed-metal cyanide, the alloy Pd1/2Pt1/2, making it impossible to prepare the simple cyanides Pd(CN)2, Pt(CN)2 or Pd1/2Pt1/2(CN)2 by this method.
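The reported compositions can be rationalized with a simple edge-counting argument: for an ideal square sheet of n × n metal atoms, the cyanide count per metal stays at 2, while the 4n terminal positions around the perimeter carry, on average, half a cyanide and half a neutral ligand (H2O or NH3), giving roughly 2/n neutral ligands per metal. The sketch below is our own illustrative counting, assuming square sheets and an in-plane metal-metal spacing of about 4.9 Å (Ni(CN)2-like); it is not taken from the paper, but it reproduces the quoted compositions for 10, 15, 30 and 80 Å sheets.

```python
# Illustrative edge-counting for nanocrystalline M(CN)2 sheets (our sketch, not from
# the paper). Assumes an ideal square sheet of n x n metal atoms with half of the
# terminal positions occupied by cyanide and half by a neutral ligand (H2O or NH3).

M_M_SPACING = 4.9  # assumed in-plane metal-metal distance in angstroms (Ni(CN)2-like)

def neutral_ligands_per_metal(sheet_edge_angstrom: float) -> float:
    """Return x in M(CN)2.xL for a square sheet of the given edge length."""
    n = round(sheet_edge_angstrom / M_M_SPACING) + 1   # metal atoms along one edge
    bridging_cn = 2 * n * (n - 1)                      # shared, in-sheet cyanides
    terminal_sites = 4 * n                             # dangling positions on the perimeter
    neutral = terminal_sites / 2                       # half H2O/NH3, half CN- (neutrality)
    assert abs((bridging_cn + terminal_sites / 2) / n**2 - 2.0) < 1e-9  # CN per metal = 2
    return neutral / n**2

for edge in (10, 15, 30, 80):
    print(f"{edge:>3} A sheet: x ~ {neutral_ligands_per_metal(edge):.2f}")
# Reported values for comparison: 0.67 (10 A), 0.50 (15 A), 0.29 (30 A), 0.12 (80 A).
```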