930 results for four-point probe method
Abstract:
The correlated k-distribution (CKD) method is widely used in the radiative transfer schemes of atmospheric models and involves dividing the spectrum into a number of bands and then reordering the gaseous absorption coefficients within each one. The fluxes and heating rates for each band may then be computed by discretizing the reordered spectrum into on the order of 10 quadrature points per major gas and performing a monochromatic radiation calculation for each point. In this presentation it is shown that for clear-sky longwave calculations, sufficient accuracy for most applications can be achieved without the need for bands: reordering may be performed on the entire longwave spectrum. The resulting full-spectrum correlated k (FSCK) method requires significantly fewer monochromatic calculations than standard CKD to achieve a given accuracy. The concept is first demonstrated by comparing with line-by-line calculations for an atmosphere containing only water vapor, in which it is shown that the accuracy of heating-rate calculations improves approximately in proportion to the square of the number of quadrature points. For more than around 20 points, the root-mean-squared error flattens out at around 0.015 K/day due to the imperfect rank correlation of absorption spectra at different pressures in the profile. The spectral overlap of m different gases is treated by considering an m-dimensional hypercube where each axis corresponds to the reordered spectrum of one of the gases. This hypercube is then divided up into a number of volumes, each approximated by a single quadrature point, such that the total number of quadrature points is slightly fewer than the sum of the number that would be required to treat each of the gases separately. The gaseous absorptions for each quadrature point are optimized such that they minimize a cost function expressing the deviation of the heating rates and fluxes calculated by the FSCK method from line-by-line calculations for a number of training profiles. This approach is validated for atmospheres containing water vapor, carbon dioxide, and ozone, in which it is found that in the troposphere and most of the stratosphere, heating-rate errors of less than 0.2 K/day can be achieved using a total of 23 quadrature points, decreasing to less than 0.1 K/day for 32 quadrature points. It would be relatively straightforward to extend the method to include other gases.
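As a rough illustration of the reordering idea in this abstract, the sketch below builds a full-spectrum (band-free) k-distribution from a synthetic absorption spectrum and integrates the transmittance with a small Gauss-Legendre quadrature. The spectrum, the Planck-like weighting, and the absorber amount are invented stand-ins for line-by-line data; the actual FSCK method additionally optimizes the quadrature points against training profiles.

```python
# Sketch: reorder the entire spectrum by absorption strength, then approximate
# the spectrally integrated transmittance with ~10 quadrature points.
import numpy as np

rng = np.random.default_rng(0)
nu = np.linspace(10.0, 3000.0, 20000)           # wavenumber grid (cm^-1)
k_nu = 10.0 ** rng.normal(-1.0, 2.0, nu.size)   # synthetic absorption coefficients
weight = np.exp(-((nu - 600.0) / 700.0) ** 2)   # crude Planck-like weighting
weight /= weight.sum()

# Reordering: k becomes a smooth, monotonic function of cumulative weight g in [0, 1]
order = np.argsort(k_nu)
k_g = k_nu[order]
g = np.cumsum(weight[order])

u = 0.03                                         # absorber amount (arbitrary units)
exact = np.sum(weight * np.exp(-k_nu * u))       # "line-by-line" transmittance

ng = 10                                          # of order 10 quadrature points
nodes, w_quad = np.polynomial.legendre.leggauss(ng)
g_nodes = 0.5 * (nodes + 1.0)                    # map [-1, 1] onto [0, 1]
k_nodes = np.interp(g_nodes, g, k_g)
approx = 0.5 * np.sum(w_quad * np.exp(-k_nodes * u))

print(f"line-by-line: {exact:.6f}   {ng}-point reordered: {approx:.6f}")
```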
Abstract:
This paper seeks to illustrate the point that physical inconsistencies between thermodynamics and dynamics usually introduce nonconservative production/destruction terms in the local total energy balance equation in numerical ocean general circulation models (OGCMs). Such terms potentially give rise to undesirable forces and/or diabatic terms in the momentum and thermodynamic equations, respectively, which could explain some of the observed errors in simulated ocean currents and water masses. In this paper, a theoretical framework is developed to provide a practical method for determining such nonconservative terms, which is illustrated in the context of a relatively simple form of the hydrostatic Boussinesq primitive equations used in early versions of OGCMs, for which at least four main potential sources of energy nonconservation are identified; they arise from: (1) the “hanging” kinetic energy dissipation term; (2) assuming potential or conservative temperature to be a conservative quantity; (3) the interaction of the Boussinesq approximation with the parameterizations of turbulent mixing of temperature and salinity; (4) some adiabatic compressibility effects due to the Boussinesq approximation. In practice, OGCMs also possess spurious numerical energy sources and sinks, but these are not explicitly addressed here. Apart from (1), the identified nonconservative energy sources/sinks are not sign definite, allowing for possible widespread cancellation when integrated globally. Locally, however, these terms may be of the same order of magnitude as actual energy conversion terms thought to occur in the oceans. Although the actual impact of these nonconservative energy terms on the overall accuracy and physical realism of simulated oceans is difficult to ascertain, an important issue is whether they could impact on transient simulations, and on the transition toward different circulation regimes associated with a significant reorganization of the different energy reservoirs. Some possible solutions for improvement are examined. It is thus found that term (2) can be substantially reduced, by at least one order of magnitude, by using conservative temperature instead of potential temperature. Using the anelastic approximation, however, which was initially thought of as a possible way to greatly improve the accuracy of the energy budget, would only marginally reduce term (4), with no impact on terms (1), (2), and (3).
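Point (2) can be illustrated numerically: the sketch below uses the GSW-Python implementation of TEOS-10 (an assumption; any TEOS-10 library would do) to show how potential temperature and conservative temperature differ for a few arbitrary salinity/temperature pairs, the difference being the quantity whose neglect produces the spurious source term.

```python
# Illustration of point (2): potential vs. conservative temperature under
# TEOS-10. Assumes the GSW-Python package; the sample values are arbitrary.
import numpy as np
import gsw

SA = np.array([34.0, 35.0, 36.0, 37.0])   # absolute salinity (g/kg)
pt = np.array([0.0, 10.0, 20.0, 28.0])    # potential temperature (deg C)

CT = gsw.CT_from_pt(SA, pt)                # conservative temperature (deg C)
print("pt - CT (deg C):", pt - CT)         # small but systematic differences
```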
Abstract:
The problem of the appropriate distribution of forces among the fingers of a four-fingered robot hand is addressed. The finger-object interactions are modelled as point frictional contacts, hence the system is indeterminate and an optimal solution is required for controlling the forces acting on an object. A fast and efficient method for computing the grasping and manipulation forces is presented, in which the computation is based on the true model of the nonlinear frictional cone of contact. Results are compared with previously employed methods that linearize the cone constraints and minimize the internal forces.
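A minimal sketch of the kind of optimization problem this abstract describes: minimizing the contact force magnitudes of a four-fingered grasp subject to the true quadratic friction-cone constraints rather than a linearized cone. The grasp geometry, friction coefficient, external wrench, and SLSQP solver are illustrative assumptions, not details taken from the paper.

```python
# Grasp-force optimization with the true (non-linearized) Coulomb friction cone.
import numpy as np
from scipy.optimize import minimize

mu = 0.5                                            # assumed friction coefficient
contacts = np.array([[0.5, 0, 0], [-0.5, 0, 0],     # four contact points on a
                     [0, 0.5, 0], [0, -0.5, 0.0]])  # unit object (assumed)
normals = np.array([[-1, 0, 0], [1, 0, 0],
                    [0, -1, 0], [0, 1, 0.0]])       # inward unit normals
w_ext = np.array([0, 0, -9.81, 0, 0, 0.0])          # external wrench (gravity)

def wrench(f):
    """Net force and torque produced by the four contact forces (flattened 4x3)."""
    f = f.reshape(4, 3)
    return np.concatenate([f.sum(axis=0), np.cross(contacts, f).sum(axis=0)])

cons = [{"type": "eq", "fun": lambda f: wrench(f) + w_ext}]  # static equilibrium
for i in range(4):
    def cone(f, i=i):
        fi = f.reshape(4, 3)[i]
        fn = fi @ normals[i]                        # normal component
        ft = fi - fn * normals[i]                   # tangential component
        return mu * fn - np.sqrt(ft @ ft + 1e-12)   # ||ft|| <= mu * fn (smoothed)
    cons.append({"type": "ineq", "fun": cone})

f0 = 5.0 * normals.flatten()                        # start by squeezing inwards
res = minimize(lambda f: f @ f, f0, constraints=cons, method="SLSQP")
print(res.success, res.x.reshape(4, 3).round(3))
```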
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
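A minimal sketch, assuming invented genotype counts, of the kind of computational Bayesian analysis the abstract outlines: a grid posterior over the allele frequency p (the nuisance parameter) and the inbreeding coefficient f, with the boundary constraints on f imposed by requiring all genotype probabilities to be non-negative.

```python
# Grid-based Bayesian assessment of departure from HW under the inbreeding model.
import numpy as np

n_AA, n_Aa, n_aa = 30, 40, 30               # hypothetical genotype counts

p = np.linspace(0.001, 0.999, 400)          # allele frequency grid
f = np.linspace(-0.999, 0.999, 400)         # inbreeding coefficient grid
P, F = np.meshgrid(p, f)                    # shape (len(f), len(p))
Q = 1.0 - P

pAA = P**2 + F * P * Q                      # genotype probabilities under
pAa = 2 * P * Q * (1 - F)                   # the inbreeding model
paa = Q**2 + F * P * Q

valid = (pAA > 0) & (pAa > 0) & (paa > 0)   # boundary constraints on f
logL = np.where(valid,
                n_AA * np.log(np.where(valid, pAA, 1.0))
                + n_Aa * np.log(np.where(valid, pAa, 1.0))
                + n_aa * np.log(np.where(valid, paa, 1.0)),
                -np.inf)

post = np.exp(logL - logL.max())            # flat prior over the valid region
post /= post.sum()
post_f = post.sum(axis=1)                   # marginal posterior for f
print("posterior mean of f:", np.sum(f * post_f))
print("P(f > 0 | data):   ", post_f[f > 0].sum())
```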
Abstract:
Four new Cu(II)-azido complexes of formula [CuL(N₃)] (1), [CuL(N₃)]₂ (2), [Cu₇L₂(N₃)₁₂]ₙ (3), and [Cu₂L(dmen)(N₃)₃]ₙ (4) (dmen = N,N-dimethylethylenediamine) have been synthesized using the same tridentate Schiff base ligand HL (2-[1-(2-dimethylaminoethylimino)ethyl]phenol, the condensation product of dmen and 2-hydroxyacetophenone). The four compounds have been characterized by X-ray structural analyses and variable-temperature magnetic susceptibility measurements. Complex 1 is mononuclear, whereas 2 is a single μ-1,1 azido-bridged dinuclear compound. The polymeric compound 3 possesses a 2D structure in which the Cu(II) ions are linked by phenoxo oxygen atoms and two different azide bridges (μ-1,1 and μ-1,1,3). The structure of complex 4 is a double helix in which two μ-1,3-azido-bridged alternating one-dimensional helical chains of CuL(N₃) and Cu(dmen)(N₃)₂ are joined together by weak μ-1,1 azido bridges and H-bonds. The complexes interconvert in solution and can be obtained in pure form by carefully controlling the conditions. The magnetic properties of compounds 1 and 2 show the presence of very weak antiferromagnetic exchange interactions mediated by a ligand π overlap (J = −1.77 cm⁻¹) and by an asymmetric 1,1-N₃ bridge (J = −1.97 cm⁻¹), respectively. Compound 3 presents, from the magnetic point of view, a decorated chain structure with both ferro- and antiferromagnetic interactions. Compound 4 is an alternating helicoidal chain with two weak antiferromagnetic exchange interactions (J = −1.35 and −2.64 cm⁻¹).
Abstract:
We explicitly tested for the first time the ‘environmental specificity’ of traditional 16S rRNA-targeted fluorescence in situ hybridization (FISH) through comparison of the bacterial diversity actually targeted in the environment with the diversity that should be exactly targeted (i.e. without mismatches) according to in silico analysis. To do this, we exploited advances in modern flow cytometry that enabled improved detection, and therefore sorting, of sub-micron-sized particles, and used probe PSE1284 (designed to target pseudomonads) applied to Lolium perenne rhizosphere soil as our test system. The 6-carboxyfluorescein (6-FAM)-PSE1284-hybridised population, defined as displaying enhanced green fluorescence in flow cytometry, represented 3.51±1.28% of the total detected population when corrected using a nonsense (NON-EUB338) probe control. Analysis of 16S rRNA gene libraries constructed from fluorescence-activated cell sorting (FACS)-recovered fluorescent populations (n=3) revealed that 98.5% of the total sorted population (Pseudomonas spp. comprised 68.7% and Burkholderia spp. 29.8%) was specifically targeted, as evidenced by the homology of the 16S rRNA sequences to the probe sequence. In silico evaluation of probe PSE1284 with the use of RDP-10 probeMatch justified the existence of Burkholderia spp. among the sorted cells. The lack of novelty in the Pseudomonas spp. sequences uncovered was notable, probably reflecting the well-studied nature of this functionally important genus. To judge the diversity recorded within the FACS-sorted population, rarefaction and DGGE analysis were used to evaluate, respectively, the proportion of Pseudomonas diversity uncovered by the sequencing effort and the representativeness of the Nycodenz® method for the extraction of bacterial cells from soil.
Abstract:
Unless the benefits to society of measures to protect and improve the welfare of animals are made transparent by means of their valuation, they are likely to go unrecognised and cannot easily be weighed against the costs of such measures as required, for example, by policy-makers. A simple single-measure scoring system, based on the Welfare Quality® index, is used, together with a choice experiment economic valuation method, to estimate the value that people place on improvements to the welfare of different farm animal species measured on a continuous (0-100) scale. Results from using the method on a survey sample of some 300 people show that it is able to elicit apparently credible values. The survey found that 96% of respondents thought that we have a moral obligation to safeguard the welfare of animals and that over 72% were concerned about the way farm animals are treated. Estimated mean annual willingness to pay for meat from animals with improved welfare of just one point on the scale was £5.24 for beef cattle, £4.57 for pigs and £5.10 for meat chickens. Further development of the method is required to capture the total economic value of animal welfare benefits. Despite this, the method is considered a practical means for obtaining economic values that can be used in the cost-benefit appraisal of policy measures intended to improve the welfare of animals.
Abstract:
We study initial-boundary value problems for linear evolution equations of arbitrary spatial order, subject to arbitrary linear boundary conditions and posed on a rectangular 1-space, 1-time domain. We give a new characterisation of the boundary conditions that specify well-posed problems using Fokas' transform method. We also give a sufficient condition guaranteeing that the solution can be represented using a series. The relevant condition, the analyticity at infinity of certain meromorphic functions within particular sectors, is significantly more concrete and easier to test than the previous criterion, based on the existence of admissible functions.
Abstract:
A new incremental four-dimensional variational (4D-Var) data assimilation algorithm is introduced. The algorithm does not require the computationally expensive integrations with the nonlinear model in the outer loops. Nonlinearity is accounted for by modifying the linearization trajectory of the observation operator based on integrations with the tangent linear (TL) model. This allows us to update the linearization trajectory of the observation operator in the inner loops at negligible computational cost. As a result, the distinction between inner and outer loops is no longer necessary. The key idea on which the proposed 4D-Var method is based is that by using Gaussian quadrature it is possible to obtain an exact correspondence between the nonlinear time evolution of perturbations and the time evolution in the TL model. It is shown that J-point Gaussian quadrature can be used to derive the exact adjoint-based observation impact equations, and furthermore that it is straightforward to account for the effect of multiple outer loops in these equations if the proposed 4D-Var method is used. The method is illustrated using a three-level quasi-geostrophic model and the Lorenz (1996) model.
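The "key idea" in this abstract can be demonstrated on a toy example: for a quadratically nonlinear model such as Lorenz (1996), the nonlinear evolution of a finite perturbation is matched exactly by the TL operator evaluated at Gaussian-quadrature points between the reference and perturbed states, and for a quadratic tendency a single midpoint node already suffices. This is a schematic of the underlying identity, not the paper's 4D-Var implementation.

```python
# Exact TL/nonlinear correspondence via Gaussian quadrature for the
# Lorenz (1996) tendency, whose nonlinearity is quadratic, so 1-point
# Gauss-Legendre quadrature (the midpoint) is already exact.
import numpy as np

def lorenz96(x, F=8.0):
    """Lorenz (1996) tendency dx/dt."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def tl_lorenz96(x, dx):
    """Tangent-linear of the Lorenz-96 tendency about state x, applied to dx."""
    return ((np.roll(dx, -1) - np.roll(dx, 2)) * np.roll(x, 1)
            + (np.roll(x, -1) - np.roll(x, 2)) * np.roll(dx, 1) - dx)

rng = np.random.default_rng(0)
x = rng.standard_normal(40)
dx = rng.standard_normal(40)          # a finite (not infinitesimal) perturbation

nonlinear = lorenz96(x + dx) - lorenz96(x)
tl_at_x = tl_lorenz96(x, dx)                 # ordinary TL: only approximate
tl_midpoint = tl_lorenz96(x + 0.5 * dx, dx)  # TL at the quadrature point: exact

print(np.max(np.abs(nonlinear - tl_at_x)))      # O(|dx|^2) error
print(np.max(np.abs(nonlinear - tl_midpoint)))  # ~ machine precision
```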
Abstract:
A traditional plate count method and specific real-time PCR systems based on SYBR Green I and TaqMan technologies, using a specific primer pair and probe for amplification of the iap gene, were used for quantitative assay of Listeria monocytogenes in seven decimal serial dilution series of nutrient broth and milk samples containing 1.58 to 1.58×10⁷ cfu/ml, and the real-time PCR methods were compared with the plate count method with respect to accuracy and sensitivity. In this study, the plate count method was performed using surface-plating of 0.1 ml of each sample on Palcam Agar. The lowest detectable level for this method was 1.58×10 cfu/ml for both nutrient broth and milk samples. Using purified DNA as a template for generation of standard curves, as few as four copies of the iap gene could be detected per reaction with both real-time PCR assays, indicating that they were highly sensitive. When these real-time PCR assays were applied to quantification of L. monocytogenes in decimal serial dilution series of nutrient broth and milk samples, 3.16×10 to 3.16×10⁵ copies per reaction (equal to 1.58×10³ to 1.58×10⁷ cfu/ml L. monocytogenes) were detectable. On a logarithmic scale, the quantitative results of the detectable steps for the plate count and both molecular assays were similar to the inoculation levels.
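The standard-curve arithmetic underlying this kind of absolute quantification is simple enough to sketch: regress Ct against log10(template copies) for the dilution series, then read unknowns off the fitted line. The Ct values below are invented; only the rough range of standard copies echoes the abstract.

```python
# Absolute real-time PCR quantification from a standard curve (hypothetical Ct values).
import numpy as np

copies = np.array([4e0, 4e1, 4e2, 4e3, 4e4, 4e5])        # standard dilutions
ct     = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])  # invented Ct values

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1 / slope) - 1      # amplification efficiency from the slope
print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")

ct_unknown = 24.9                        # Ct of an unknown sample
log_copies = (ct_unknown - intercept) / slope
print(f"estimated copies/reaction: {10**log_copies:.0f}")
```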
Abstract:
Shiga toxin-producing Escherichia coli (STEC) strains are foodborne pathogens whose ability to produce Shiga toxin (Stx) is due to the integration of Stx-encoding lambdoid bacteriophage (Stx phage). Circulating, infective Stx phages are very difficult to isolate, purify and propagate, with the result that there is no information on their genetic composition and properties. Here we describe a novel approach that exploits the phages' ability to infect their host and form a lysogen, thus enabling purification of Stx phages by a series of sequential lysogen isolation and induction steps. A total of 15 Stx phages were rigorously purified from water samples in this way, classified by TEM and genotyped using a PCR-based multi-locus characterisation system. Each phage possessed only one variant of each target gene type, thus confirming its purity, with 9 of the 15 phages possessing a short tail-spike gene and identified by TEM as Podoviridae. The remaining 6 phages possessed long tails, four of which appeared to be contractile in nature (Myoviridae) and two of which were morphologically very similar to bacteriophage lambda (Siphoviridae).
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Necessitated by the requirement of numerical weather prediction models to solve the governing finite difference equations on such a grid lattice, the objective analysis is a three-dimensional (or mostly two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been intensively used for many years. Weather services have thus based their analysis not only on synoptic data at the time of the analysis and climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact be justified also for the conventional observations. We have a fairly good coverage of surface observations 8 times a day, and several upper-air stations are making radiosonde and radiowind observations 4 times a day. If we have a 3-hour step in the analysis-forecasting cycle instead of 12 hours, which is applied most often, we may without any difficulty treat all observations as synoptic. No observation would thus be more than 90 minutes off time, and the observations even during strong transient motion would fall within a horizontal mesh of 500 km × 500 km.
Abstract:
In addition to the Hamiltonian functional itself, non-canonical Hamiltonian dynamical systems generally possess integral invariants known as ‘Casimir functionals’. In the case of the Euler equations for a perfect fluid, the Casimir functionals correspond to the vortex topology, whose invariance derives from the particle-relabelling symmetry of the underlying Lagrangian equations of motion. In a recent paper, Vallis, Carnevale & Young (1989) have presented algorithms for finding steady states of the Euler equations that represent extrema of energy subject to given vortex topology, and are therefore stable. The purpose of this note is to point out a very general method for modifying any Hamiltonian dynamical system into an algorithm that is analogous to those of Vallis et al. in that it will systematically increase or decrease the energy of the system while preserving all of the Casimir invariants. By incorporating momentum into the extremization procedure, the algorithm is able to find steadily translating as well as steady stable states. The method is applied to a variety of perfect-fluid systems, including Euler flow as well as compressible and incompressible stratified flow.
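The modification described here is easiest to see in a finite-dimensional non-canonical system. The sketch below applies it to the free rigid body: adding a term proportional to m × (m × ∇H) to the Lie-Poisson dynamics dm/dt = m × ∇H monotonically decreases the energy while exactly preserving the Casimir |m|², the finite-dimensional analogue of preserving vortex topology. The moments of inertia, alpha, and the integrator are arbitrary choices, not taken from the paper.

```python
# Casimir-preserving energy descent for the free rigid body: both terms of the
# modified dynamics are perpendicular to m, so |m|^2 is conserved by the
# continuous flow, while dH/dt = -alpha * |m x grad(H)|^2 <= 0.
import numpy as np

I = np.array([1.0, 2.0, 3.0])               # principal moments of inertia

def grad_H(m):                               # H = sum(m_i^2 / (2 I_i))
    return m / I

def rhs(m, alpha=1.0):
    u = np.cross(m, grad_H(m))               # Hamiltonian part
    return u + alpha * np.cross(m, u)        # Casimir-preserving "dissipation"

m = np.array([0.1, 1.0, 0.1])
dt, nsteps = 1e-3, 50000
for _ in range(nsteps):                      # forward Euler, small steps
    m = m + dt * rhs(m)

print("Casimir |m|^2:", m @ m)               # ~1.02, conserved up to O(dt) error
print("energy H:     ", 0.5 * np.sum(m * m / I))  # decreases toward the minimum
```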
Abstract:
Sub-seasonal variability, including equatorial waves, significantly influences the dehydration and transport processes in the tropical tropopause layer (TTL). This study investigates the wave activity in the TTL in 7 reanalysis data sets (RAs; NCEP1, NCEP2, ERA40, ERA-Interim, JRA25, MERRA, and CFSR) and 4 chemistry climate models (CCMs; CCSRNIES, CMAM, MRI, and WACCM) using the zonal wavenumber-frequency spectral analysis method with equatorially symmetric-antisymmetric decomposition. Analyses are made for temperature and horizontal winds at 100 hPa in the RAs and CCMs, and for outgoing longwave radiation (OLR), which is a proxy for convective activity that generates tropopause-level disturbances, in satellite data and the CCMs. Particular focus is placed on equatorial Kelvin waves, mixed Rossby-gravity (MRG) waves, and the Madden-Julian Oscillation (MJO). The wave activity is defined as the variance, i.e., the power spectral density integrated over a particular zonal wavenumber-frequency region. It is found that the TTL wave activities show significant differences among the RAs, ranging from ∼0.7 (for NCEP1 and NCEP2) to ∼1.4 (for ERA-Interim, MERRA, and CFSR) with respect to the averages from the RAs. The TTL activities in the CCMs lie generally within the range of those in the RAs, with a few exceptions. However, the spectral features in OLR for all the CCMs are very different from those in the observations, and the OLR wave activities are too low for CCSRNIES, CMAM, and MRI. It is concluded that the broad range of wave activity found in the different RAs decreases our confidence in their validity, and in particular their value for validation of CCM performance in the TTL, thereby limiting our quantitative understanding of the dehydration and transport processes in the TTL.
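A schematic of the zonal wavenumber-frequency analysis used in this study, in the Wheeler-Kiladis spirit: decompose the field into equatorially symmetric and antisymmetric parts, Fourier transform in longitude and time, and integrate the power over a chosen wavenumber-frequency box to obtain a single "wave activity" number. The input below is synthetic noise standing in for 100-hPa fields or OLR, and the Kelvin-band box (eastward wavenumbers 1-10, periods 3-20 days) is an assumed example, so the printed value is in arbitrary units.

```python
# Wavenumber-frequency power spectrum with symmetric/antisymmetric decomposition.
import numpy as np

nt, nlat, nlon = 512, 9, 144                 # time, latitudes about equator, longitude
dt_days = 0.25                               # assumed 6-hourly sampling
field = np.random.default_rng(1).standard_normal((nt, nlat, nlon))

sym = 0.5 * (field + field[:, ::-1, :])      # equatorially symmetric part (Kelvin)
asym = 0.5 * (field - field[:, ::-1, :])     # antisymmetric part (MRG; not used below)

def power(x):
    """Power in (frequency, zonal wavenumber) space, averaged over latitude.
    fft in longitude and ifft in time place eastward waves at (+freq, +k)."""
    spec = np.fft.ifft(np.fft.fft(x, axis=2), axis=0)
    return (np.abs(spec) ** 2).mean(axis=1)

ps = power(sym)
freq = np.fft.fftfreq(nt, d=dt_days)         # frequency axis in cycles per day

in_band = (freq > 1.0 / 20.0) & (freq < 1.0 / 3.0)   # periods of 3-20 days
kelvin_activity = ps[in_band][:, 1:11].sum()         # eastward wavenumbers 1-10
print("Kelvin-band wave activity (arbitrary units):", kelvin_activity)
```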