28 results for Standard method

in CentAUR: Central Archive University of Reading - UK


Relevance:

70.00%

Abstract:

A method involving a two-stage mathematical modeling process is proposed to determine the extent of degradation in the rumen. In the first stage, a statistical model shifts (or maps) the gas accumulation profile obtained using a fecal inoculum to a ruminal gas profile. Then, a kinetic model determines the extent of degradation in the rumen from the shifted profile. The kinetic model is presented as a generalized mathematical function, allowing any one of a number of alternative equation forms to be selected. This method might allow the gas production technique to become an approach for determining the extent of degradation in the rumen, decreasing the need for surgically modified animals while still maintaining the link with the animal. Further research is needed before the proposed methodology can be used as a standard method across a range of feeds.
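
A minimal sketch of the two-stage idea (the linear shift, the single-pool kinetic form, and the passage rate below are illustrative assumptions, not the authors' fitted models):

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([2., 4., 8., 12., 24., 48., 72.])      # incubation times (h)
gas_fecal = np.array([3., 7., 14., 20., 32., 42., 46.])  # fecal-inoculum gas (mL)

# Stage 1 (assumed form): linear map from the fecal to the ruminal profile,
# with coefficients taken as known from a paired calibration study.
a_hat, b_hat = 1.5, 1.2
gas_rumen = a_hat + b_hat * gas_fecal

# Stage 2 (one possible kinetic form): single-pool exponential with a lag.
def gas_model(t, A, mu, lag):
    return A * (1.0 - np.exp(-mu * np.clip(t - lag, 0.0, None)))

(A, mu, lag), _ = curve_fit(gas_model, t, gas_rumen, p0=[60., 0.05, 2.])

k = 0.05                  # assumed fractional passage rate (1/h)
extent = mu / (mu + k)    # extent of degradation under this single-pool model
print(round(extent, 2))
```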

Relevance:

70.00%

Abstract:

Cross-contamination between cell lines is a longstanding and frequent cause of scientific misrepresentation. Estimates from national testing services indicate that up to 36% of cell lines are of a different origin or species to that claimed. To test a standard method of cell line authentication, 253 human cell lines from banks and research institutes worldwide were analyzed by short tandem repeat profiling. The short tandem repeat profile is a simple numerical code that is reproducible between laboratories, is inexpensive, and can provide an international reference standard for every cell line. If DNA profiling of cell lines is accepted and demanded internationally, scientific misrepresentation because of cross-contamination can be largely eliminated.
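
As a hedged illustration of how such a numerical code can be compared between laboratories (the marker names and allele calls below are made up; real services use defined marker panels and match thresholds):

```python
# A minimal sketch, not a bank's algorithm: an STR profile as a mapping
# from marker name to a set of allele calls, scored by shared alleles.
def str_match(profile_a, profile_b):
    shared = total = 0
    for marker in profile_a.keys() & profile_b.keys():
        shared += len(profile_a[marker] & profile_b[marker])
        total += len(profile_a[marker] | profile_b[marker])
    return shared / total if total else 0.0

reference = {"TH01": {6, 9.3}, "D13S317": {11, 12}, "D5S818": {11, 12}}
query     = {"TH01": {6, 9.3}, "D13S317": {11, 12}, "D5S818": {11}}
print(str_match(reference, query))   # a high score suggests a common origin
```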

Relevance:

60.00%

Abstract:

The 11-yr solar cycle temperature response to spectrally resolved solar irradiance changes and associated ozone changes is calculated using a fixed dynamical heating (FDH) model. Imposed ozone changes are from satellite observations, in contrast to some earlier studies. A maximum of 1.6 K is found in the equatorial upper stratosphere and a secondary maximum of 0.4 K in the equatorial lower stratosphere, forming a double peak in the vertical. The upper maximum is primarily due to the irradiance changes while the lower maximum is due to the imposed ozone changes. The results compare well with analyses using the 40-yr ECMWF Re-Analysis (ERA-40) and NCEP/NCAR datasets. The equatorial lower stratospheric structure is reproduced even though, by definition, the FDH calculations exclude dynamically driven temperature changes, suggesting an important role for an indirect dynamical effect through ozone redistribution. The results also suggest that differences between the Stratospheric Sounding Unit (SSU)/Microwave Sounding Unit (MSU) and ERA-40 estimates of the solar cycle signal can be explained by the poor vertical resolution of the SSU/MSU measurements. The adjusted radiative forcing of climate change is also investigated. The forcing due to irradiance changes was 0.14 W m−2, which is only 78% of the value obtained by employing the standard method of simple scaling of the total solar irradiance (TSI) change. The difference arises because much of the change in TSI is at wavelengths where ozone absorbs strongly. The forcing due to the ozone change was only 0.004 W m−2 owing to strong compensation between negative shortwave and positive longwave forcings.
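
The 78% figure can be checked to first order with the usual conversion from a TSI change to a global-mean forcing (the ~1 W m−2 solar-cycle TSI change and the 0.3 planetary albedo are assumed here for illustration):

```python
d_tsi = 1.0                            # assumed solar-cycle TSI change (W m-2)
albedo = 0.3                           # assumed planetary albedo
rf_simple = d_tsi * (1 - albedo) / 4   # simple TSI scaling: ~0.175 W m-2
rf_spectral = 0.14                     # spectrally resolved value (abstract)
print(rf_spectral / rf_simple)         # ~0.80, close to the quoted 78%
```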

Relevance:

60.00%

Abstract:

We consider the comparison of two formulations in terms of average bioequivalence using the 2 × 2 cross-over design. In a bioequivalence study, the primary outcome is a pharmacokinetic measure, such as the area under the plasma concentration by time curve, which is usually assumed to have a lognormal distribution. The criterion typically used for claiming bioequivalence is that the 90% confidence interval for the ratio of the means should lie within the interval (0.80, 1.25), or equivalently that the 90% confidence interval for the difference in the means on the natural log scale should be within the interval (-0.2231, 0.2231). We compare the gold standard method for calculation of the sample size, based on the non-central t distribution, with those based on the central t and normal distributions. In practice, the differences between the various approaches are likely to be small. Further approximations to the power function are sometimes used to simplify the calculations. These approximations should be used with caution, because the sample size required for a desirable level of power might be under- or overestimated compared to the gold standard method. However, in some situations the approximate methods produce very similar sample sizes to the gold standard method.
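
A minimal sketch of the gold-standard calculation, using a common non-central t approximation to the power of the two one-sided tests (TOST) procedure; the CV, true ratio, and target power are illustrative:

```python
import numpy as np
from scipy import stats

def tost_power(n, cv, ratio=0.95, alpha=0.05):
    """Approximate TOST power for average bioequivalence in a 2x2 cross-over.
    n: total subjects; cv: within-subject coefficient of variation."""
    sigma_w = np.sqrt(np.log(1.0 + cv**2))   # within-subject SD on log scale
    se = sigma_w * np.sqrt(2.0 / n)          # SE of the treatment difference
    df = n - 2
    theta = np.log(ratio)                    # true log-ratio of the means
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp_lo = (theta - np.log(0.80)) / se
    ncp_hi = (theta - np.log(1.25)) / se
    power = stats.nct.cdf(-tcrit, df, ncp_hi) - stats.nct.cdf(tcrit, df, ncp_lo)
    return max(power, 0.0)

def sample_size(cv, ratio=0.95, target=0.80):
    n = 4
    while tost_power(n, cv, ratio) < target:
        n += 2                               # keep the two sequences balanced
    return n

print(sample_size(cv=0.25))   # -> 28 for these illustrative settings
```

Exact calculations replace this noncentral-t approximation with Owen's Q function; for moderate sample sizes the two agree closely.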

Relevance:

60.00%

Abstract:

Nitrogen adsorption on carbon nanotubes is widely studied because nitrogen adsorption isotherm measurement is a standard method for porosity characterization. A further reason is that carbon nanotubes are potential adsorbents for separating nitrogen from oxygen in air. The study presented here describes the results of GCMC simulations of nitrogen (three-site model) adsorption on single- and multi-walled closed nanotubes. The results obtained are described by a new adsorption isotherm model proposed in this study. The model can be treated as the tube analogue of the GAB isotherm, taking into account the lateral adsorbate-adsorbate interactions. We show that the model describes the simulated data satisfactorily. Next, this new approach is applied to the description of experimental data measured on different commercially available (and HRTEM-characterized) carbon nanotubes. We show that a generally good fit is observed, and it is therefore suggested that the observed mechanism of adsorption in the studied materials is determined mainly by adsorption on tubes separated at large distances, so that the tubes behave almost independently.
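
The paper's tube analogue is not reproduced here; as a sketch of the fitting step, the classical GAB isotherm (which the new model generalizes) can be fitted to simulated uptake data like so (all numbers invented):

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(p, n_m, C, K):
    """Classical GAB isotherm; the paper's model is a tube analogue of this."""
    return n_m * C * K * p / ((1 - K * p) * (1 - K * p + C * K * p))

# Synthetic "simulated" uptake data with 1% noise (all numbers invented)
rng = np.random.default_rng(0)
p = np.linspace(0.02, 0.9, 15)                 # relative pressure p/p0
n_obs = gab(p, 8.0, 20.0, 0.75) * (1 + 0.01 * rng.standard_normal(p.size))

(n_m, C, K), _ = curve_fit(gab, p, n_obs, p0=[5.0, 10.0, 0.5])
print(n_m, C, K)                               # recovers ~(8, 20, 0.75)
```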

Relevance:

60.00%

Abstract:

The dynamics of Northern Hemisphere major midwinter stratospheric sudden warmings (SSWs) are examined using transient climate change simulations from the Canadian Middle Atmosphere Model (CMAM). The simulated SSWs show good overall agreement with reanalysis data in terms of composite structure, statistics, and frequency. Using observed or model sea surface temperatures (SSTs) is found to make no significant difference to the SSWs, indicating that the use of model SSTs in the simulations extending into the future is not an issue. When SSWs are defined by the standard (wind-based) definition, an absolute criterion, their frequency is found to increase by about 60% by the end of this century, in conjunction with a roughly 25% decrease in their temperature amplitude. However, when a relative criterion based on the northern annular mode index is used to define the SSWs, no future increase in frequency is found. The latter is consistent with the fact that the variance of 100-hPa daily heat flux anomalies is unaffected by climate change. The future increase in frequency of SSWs using the standard method is a result of the weakened climatological mean winds resulting from climate change, which make it easier for the SSW criterion to be met. A comparison of winters with and without SSWs reveals that the weakening of the climatological westerlies is not a result of SSWs. The Brewer–Dobson circulation is found to be stronger by about 10% during winters with SSWs, a value that does not change significantly in the future.
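
A hedged sketch of the standard wind-based detection (daily zonal-mean zonal wind at 10 hPa, 60°N turning easterly); the 20-day separation between distinct events is an assumed detail in the spirit of common SSW criteria, not taken from the paper:

```python
import numpy as np

def ssw_central_dates(u, min_gap=20):
    """Days on which daily zonal-mean zonal wind at 10 hPa, 60N first
    turns easterly (u < 0), with min_gap days separating distinct events."""
    events, last = [], -10**9
    for day, wind in enumerate(u):
        if wind < 0.0:
            if day - last > min_gap:
                events.append(day)
            last = day
    return events

rng = np.random.default_rng(1)
u = 25.0 + 5.0 * rng.standard_normal(150)   # synthetic winter u-wind (m/s)
u[60:68] = -5.0                             # an imposed wind reversal
print(ssw_central_dates(u))                 # -> [60]
```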

Relevance:

60.00%

Abstract:

An analysis method for diffusion tensor (DT) magnetic resonance imaging data is described, which, contrary to the standard method (multivariate fitting), does not require a specific functional model for diffusion-weighted (DW) signals. The method uses principal component analysis (PCA) under the assumption of a single fibre per pixel. PCA and the standard method were compared using simulations and human brain data. The two methods were equivalent in determining fibre orientation. PCA-derived fractional anisotropy and DT relative anisotropy had similar signal-to-noise ratio (SNR) and dependence on fibre shape. PCA-derived mean diffusivity had similar SNR to the respective DT scalar, and it depended on fibre anisotropy. Appropriate scaling of the PCA measures resulted in very good agreement between PCA and DT maps. In conclusion, the assumption of a specific functional model for DW signals is not necessary for characterization of anisotropic diffusion in a single fibre.
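
For contrast with the PCA route, a minimal sketch of the standard method: a log-linear least-squares fit of the diffusion tensor from the DW signals, followed by FA computed from the eigenvalues (the gradient scheme, b-value, and tensor are illustrative):

```python
import numpy as np

# Standard-method sketch: fit D from S = S0 * exp(-b * g^T D g), then FA.
rng = np.random.default_rng(0)
g = rng.standard_normal((30, 3))
g /= np.linalg.norm(g, axis=1, keepdims=True)       # unit gradient directions
b = 1000.0                                          # b-value, s/mm^2
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # a single-fibre tensor
S0 = 1.0
S = S0 * np.exp(-b * np.einsum('ij,jk,ik->i', g, D_true, g))

# Design matrix for the six unique tensor elements
X = -b * np.stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                   2*g[:, 0]*g[:, 1], 2*g[:, 0]*g[:, 2], 2*g[:, 1]*g[:, 2]],
                  axis=1)
d = np.linalg.lstsq(X, np.log(S / S0), rcond=None)[0]
D = np.array([[d[0], d[3], d[4]],
              [d[3], d[1], d[5]],
              [d[4], d[5], d[2]]])

lam = np.linalg.eigvalsh(D)
fa = np.sqrt(1.5 * np.sum((lam - lam.mean())**2) / np.sum(lam**2))
print(round(fa, 3))                                 # ~0.80 for this tensor
```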

Relevance:

40.00%

Abstract:

This study describes a simple technique that improves a recently developed 3D sub-diffraction imaging method based on three-photon absorption of commercially available quantum dots. The method combines imaging of biological samples via tri-exciton generation in quantum dots with deconvolution and spectral multiplexing, resulting in a novel approach for multi-color imaging of even thick biological samples at 1.4- to 1.9-fold better spatial resolution. This approach is realized on a conventional confocal microscope equipped with standard continuous-wave lasers. We demonstrate the potential of multi-color tri-exciton imaging of quantum dots combined with deconvolution on viral vesicles in lentivirally transduced cells, as well as on intermediate filaments in three-dimensional clusters of mouse-derived neural stem cells (neurospheres) and dense microtubule arrays in myotubes formed by stacks of differentiated C2C12 myoblasts.
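
The scale of the resolution gain is consistent with simple Gaussian arithmetic, under the assumption (ours, for illustration) that the tri-exciton signal scales with the cube of the excitation point-spread function:

```python
import numpy as np

# If the detected signal goes as PSF**3, a Gaussian PSF of width sigma is
# narrowed to sigma/sqrt(3), i.e. a ~1.73-fold gain, bracketing the
# reported 1.4- to 1.9-fold improvement after deconvolution.
sigma = 1.0                        # confocal PSF width (arbitrary units)
x = np.linspace(-4, 4, 2001)
psf = np.exp(-x**2 / (2 * sigma**2))
psf3 = psf**3                      # tri-exciton response ~ PSF cubed

def fwhm(x, y):
    above = x[y >= 0.5 * y.max()]
    return above[-1] - above[0]

print(fwhm(x, psf) / fwhm(x, psf3))   # ~sqrt(3) = 1.732
```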

Relevance:

30.00%

Abstract:

We consider the problem of scattering of a time-harmonic acoustic incident plane wave by a sound-soft convex polygon. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the computational cost required to achieve a prescribed level of accuracy grows linearly with respect to the frequency of the incident wave. Recently, Chandler-Wilde and Langdon proposed a novel Galerkin boundary element method for this problem for which, by incorporating the products of plane wave basis functions with piecewise polynomials supported on a graded mesh into the approximation space, they were able to demonstrate that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency. Here we propose a related collocation method, using the same approximation space, for which we demonstrate via numerical experiments a convergence rate identical to that achieved with the Galerkin scheme, but with a substantially reduced computational cost.
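
The graded mesh at the heart of both methods can be sketched in a few lines (the polynomial grading exponent is illustrative, not the paper's):

```python
import numpy as np

def graded_mesh(n, q=3.0, length=1.0):
    """n+1 mesh points on a side of length `length`, graded polynomially
    toward the endpoint at 0, so elements shrink near the corner."""
    return length * (np.arange(n + 1) / n) ** q

print(graded_mesh(8))   # points cluster near the corner at 0
```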

Relevance:

30.00%

Abstract:

In this paper we consider the problem of time-harmonic acoustic scattering in two dimensions by convex polygons. Standard boundary or finite element methods for acoustic scattering problems have a computational cost that grows at least linearly as a function of the frequency of the incident wave. Here we present a novel Galerkin boundary element method, which uses an approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh, with smaller elements closer to the corners of the polygon. We prove that the best approximation from the approximation space requires a number of degrees of freedom to achieve a prescribed level of accuracy that grows only logarithmically as a function of the frequency. Numerical results demonstrate the same logarithmic dependence on the frequency for the Galerkin method solution. Our boundary element method is a discretization of a well-known second kind combined-layer-potential integral equation. We provide a proof that this equation and its adjoint are well-posed and equivalent to the boundary value problem in a Sobolev space setting for general Lipschitz domains.
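
As a sketch of the approximation space, one such basis function is a piecewise polynomial (here a linear hat) multiplied by a plane wave in the arc-length variable; the node positions, wavenumber, and hat shape below are illustrative:

```python
import numpy as np

def hybrid_basis(s, k, s_prev, s_mid, s_next, direction=+1):
    """One hybrid basis function: a piecewise-linear hat on [s_prev, s_next]
    centred at s_mid, multiplied by the plane wave exp(i*direction*k*s)."""
    hat = np.where(s < s_mid,
                   np.clip((s - s_prev) / (s_mid - s_prev), 0.0, 1.0),
                   np.clip((s_next - s) / (s_next - s_mid), 0.0, 1.0))
    return hat * np.exp(1j * direction * k * s)

s = np.linspace(0.0, 1.0, 101)
phi = hybrid_basis(s, k=50.0, s_prev=0.2, s_mid=0.3, s_next=0.5)
print(np.abs(phi).max())   # 1.0, attained at the node s_mid
```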

Relevance:

30.00%

Abstract:

In this paper we consider the 2D Dirichlet boundary value problem for Laplace's equation in a non-locally perturbed half-plane, with data in the space of bounded and continuous functions. We show uniqueness of solution, using standard Phragmén–Lindelöf arguments. The main result is to propose a boundary integral equation formulation, to prove equivalence with the boundary value problem, and to show that the integral equation is well posed by applying a recent partial generalisation of the Fredholm alternative in Arens et al. [J. Int. Equ. Appl. 15 (2003), pp. 1-35]. This then leads to an existence proof for the boundary value problem. Keywords: boundary integral equation method, water waves, Laplace's equation.

Relevance:

30.00%

Abstract:

Stable isotopic characterization of chlorine in chlorinated aliphatic pollution is potentially very valuable for risk assessment and for monitoring remediation or natural attenuation. The approach has been underused because of the complexity of the analysis and the time it takes. We have developed a new method that eliminates sample preparation. Gas chromatography produces individually eluted sample peaks for analysis. The He carrier gas is mixed with Ar and introduced directly into the torch of a multicollector ICPMS (MC-ICPMS). The MC-ICPMS is run at a high mass resolution of at least 10,000 to eliminate the interference of ArH with Cl at mass 37. The standardization approach is similar to that for continuous-flow stable isotope analysis, in which sample and reference materials are measured successively. We have measured PCE relative to a laboratory TCE standard mixed with the sample. Solvent samples of 200 nmol to 1.3 μmol (24-165 μg of Cl) were measured. The PCE gave the same value relative to the TCE as measured by the conventional method, with a precision of 0.12% (2 × standard error), but poorer precision for the smaller samples.
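
In delta notation, the successive sample and reference measurements reduce to something like the following (the ratios are invented, chosen close to the natural 37Cl/35Cl value of about 0.32):

```python
def delta37(r_sample, r_std_before, r_std_after):
    """delta-37Cl of a sample peak (per mil) against the mean of the
    bracketing laboratory-standard measurements of 37Cl/35Cl."""
    r_std = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_std - 1.0) * 1000.0

print(delta37(0.31998, 0.31978, 0.31980))   # ~ +0.6 per mil (invented data)
```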

Relevance:

30.00%

Abstract:

There is a growing interest in using stochastic parametrizations in numerical weather and climate prediction models. Previously, Palmer (2001) outlined the issues that give rise to the need for a stochastic parametrization and the forms such a parametrization could take. In this article a method is presented that uses a comparison between a standard-resolution version and a high-resolution version of the same model to gain information relevant for a stochastic parametrization in that model. A correction term that could be used in a stochastic parametrization is derived from the thermodynamic equations of both models. The origin of the components of this term is discussed. It is found that the component related to unresolved wave-wave interactions is important and can act to compensate for large parametrized tendencies. The correction term is not proportional to the parametrized tendency. Finally, it is explained how the correction term could be used to give information about the shape of the random distribution to be used in a stochastic parametrization.
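
A schematic of the comparison (entirely illustrative shapes and random data; the paper works with the models' thermodynamic tendencies, not these arrays):

```python
import numpy as np

# Illustrative sketch: a high-resolution temperature tendency is
# coarse-grained onto the standard-resolution grid and compared with the
# standard model's resolved + parametrized tendency.
rng = np.random.default_rng(0)
nt, nx_hi, factor = 100, 256, 4      # time steps, high-res points, coarsening

dT_hi = rng.standard_normal((nt, nx_hi))                   # high-res tendency
dT_lo_resolved = rng.standard_normal((nt, nx_hi // factor))
dT_lo_param = rng.standard_normal((nt, nx_hi // factor))   # parametrized part

dT_hi_coarse = dT_hi.reshape(nt, -1, factor).mean(axis=2)  # coarse-grain
correction = dT_hi_coarse - (dT_lo_resolved + dT_lo_param)

# The distribution of `correction` (e.g. binned by the parametrized
# tendency) could suggest the shape of the stochastic term.
print(correction.mean(), correction.std())
```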

Relevance:

30.00%

Abstract:

This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper-troposphere/lower-stratosphere, in the upper-stratosphere and mesosphere, and in the Antarctic ozone hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three- or four-dimensional variational methods, direct inversion) or the type of system (CTM or NWP). Where results diverge, a main explanation is the way ozone is modelled. It is important to correctly model transport at the tropical tropopause, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry. These results indicate that when good observations are available for assimilation, the first priority for improving ozone DA systems is to improve the models. The analyses benefit strongly from the good quality of the MIPAS ozone observations. Using the analyses as a transfer standard, it is seen that MIPAS is about 5% higher than HALOE (Halogen Occultation Experiment) in the mid and upper stratosphere and mesosphere (above 30 hPa), and of order 10% higher than ozonesonde and HALOE in the lower stratosphere (100 hPa to 30 hPa). Analyses based on SCIAMACHY total column are almost as good as the MIPAS analyses; analyses based on SCIAMACHY limb profiles are worse in some areas, due to problems in the SCIAMACHY retrievals.