31 results for Basophil Degranulation Test -- methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A 'range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533–547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern is given for only one plane wave. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We will further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhäuser, Basel, 1986, pp. 93–102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Göttingen, 1999]. In particular, we propose a new version of the Kirsch–Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.
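As a hedged sketch of the underlying idea (standard range test notation, not the paper's own): for a chosen test domain G, the one-wave range test asks whether the measured far field pattern u^∞ lies in the range of the single-layer far field operator on the boundary of G,

\[ (S_\infty \varphi)(\hat{x}) = \gamma_d \int_{\partial G} e^{-ik\,\hat{x}\cdot y}\, \varphi(y)\, ds(y), \]

with γ_d a dimension-dependent constant. Approximate solvability of S_∞ φ = u^∞ (tested, for instance, via Tikhonov regularization) indicates that the scattered field extends analytically to the exterior of G, so G is accepted as containing the scatterer's support; a multiwave version can then intersect accepted test domains over many incident waves to recover the full shape.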
Abstract:
The EU-funded research project ALARM will develop and test methods and protocols for the assessment of large-scale environmental risks in order to minimise negative human impacts. Research focuses on the assessment and forecast of changes in biodiversity and in the structure, function, and dynamics of ecosystems. This includes the relationships between society, the economy and biodiversity.
Abstract:
The applications of rheology to the main processes encountered during breadmaking (mixing, sheeting, fermentation and baking) are reviewed. The most commonly used rheological test methods and their relationships to product functionality are also examined. It is shown that the most commonly used method for rheological testing of doughs, shear oscillation dynamic rheology, is generally used under deformation conditions inappropriate for breadmaking and shows little relationship with end-use performance. The frequency range used in conventional shear oscillation tests is limited to the plateau region, which is insensitive to changes in the HMW glutenin polymers thought to be responsible for variations in baking quality. The appropriate deformation conditions can be accessed either by long-time creep or relaxation measurements, or by large deformation extensional measurements at low strain rates and elevated temperatures. Molecular size and structure of the gluten polymers that make up the major structural components of wheat are related to their rheological properties via modern polymer rheology concepts. Interactions between polymer chain entanglements and branching are seen to be the key mechanisms determining the rheology of HMW polymers. Recent work confirms the observation that the dynamic shear plateau modulus is essentially independent of variations in MW of glutens amongst wheat varieties of varying baking performance, and also that it is not the size of the soluble glutenin polymers, but the secondary structural and rheological properties of the insoluble polymer fraction, that are mainly responsible for variations in baking performance. Extensional strain hardening has been shown to be a sensitive indicator of entanglements and long-chain branching in HMW polymers, and is well related to baking performance of bread doughs. The Considère failure criterion for instability in extension of polymers defines a region below which bubble walls become unstable, and predicts that when strain hardening falls below a value of around 1, bubble walls are no longer stable and coalesce rapidly, resulting in loss of gas retention, lower loaf volume and poorer texture. Strain hardening in doughs has been shown to reach this value at increasingly higher temperatures for better breadmaking varieties and is directly related to bubble stability and baking performance.
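For context, a minimal statement of the Considère criterion invoked above (standard notation, not quoted from the paper): in uniaxial extension a bubble wall remains stable while

\[ \frac{d\sigma}{d\varepsilon} > \sigma, \qquad \text{equivalently} \qquad \frac{d\ln\sigma}{d\varepsilon} > 1, \]

where σ is the true stress and ε the Hencky strain; once the strain hardening measure d ln σ/dε drops below 1, deformation localizes and the wall fails, which is the threshold of about 1 referred to above.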
Abstract:
Warfarin resistance was first discovered among Norway rat (Rattus norvegicus) populations in Scotland in 1958 and further reports of resistance, both in this species and in others, soon followed from other parts of Europe and the United States. Researchers quickly defined the practical impact of these resistance phenomena and developed robust methods by which to monitor their spread. These tasks were relatively simple because of the high degree of immunity to warfarin conferred by the resistance genes. Later, the second generation anticoagulants were introduced to control rodents resistant to the warfarin-like compounds, but resistance to difenacoum, bromadiolone and brodifacoum is now reported in certain localities in Europe and elsewhere. However, the adoption of test methods designed initially for use with the first generation compounds to identify resistance to compounds of the second generation has led to some practical difficulties in conducting tests and in establishing meaningful resistance baselines. In particular, the results of certain test methodologies are difficult to interpret in terms of the likely impact on practical control treatments of the resistance phenomena they seek to identify. This paper defines rodenticide resistance in the context of both first and second generation anticoagulants. It examines the advantages and disadvantages of existing laboratory and field methods used in the detection of rodent populations resistant to anticoagulants and proposes some improvements in the application of these techniques and in the interpretation of their results.
Abstract:
In this article, we use the no-response test idea, introduced in Luke and Potthast (2003) and Potthast (Preprint) for the inverse obstacle problem, to identify the interface of discontinuity of the coefficient γ of the operator ∇·γ(x)∇ + c(x), with piecewise regular γ and bounded function c(x). We use infinitely many Cauchy data as measurements and give a reconstructive method to localize the interface. We base this multiwave version of the no-response test on two different proofs. The first contains a pointwise estimate as used by the singular sources method. The second is built on an energy (or integral) estimate which is the basis of the probe method. As a consequence, the probe and singular sources methods are equivalent regarding their convergence, and the no-response test can be seen as a unified framework for these methods. As a further contribution, we provide a formula to reconstruct the values of the jump of γ(x), x ∈ ∂D, at the boundary. A second consequence of this formula is that the blow-up rate of the indicator functions of the probe and singular sources methods at the interface is given by the order of the singularity of the fundamental solution.
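As a hedged illustration of the final statement (standard notation, assumed rather than quoted): with Φ the fundamental solution of the principal part of the operator, e.g.

\[ \Phi(x, y) = \frac{1}{4\pi\, |x - y|} \]

in three dimensions, the indicator functions of the probe and singular sources methods grow like the singularity of Φ as the source point z approaches the interface ∂D, so their blow-up rate at the interface is governed by the order of that singularity.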
Abstract:
The goal of the review is to provide a state-of-the-art survey of sampling and probe methods for the solution of inverse problems. Further, a configuration approach to some of the problems is presented. We study the concepts and analytical results for several recent sampling and probe methods. We give an introduction to the basic idea behind each method using a simple model problem and then provide a general formulation in terms of particular configurations to study the range of the arguments which are used to set up the method. This provides a novel way to present the algorithms and the analytic arguments for their investigation in a variety of different settings. In detail, we investigate the probe method (Ikehata), the linear sampling method (Colton-Kirsch), the factorization method (Kirsch), the singular sources method (Potthast), the no response test (Luke-Potthast), the range test (Kusiak, Potthast and Sylvester) and the enclosure method (Ikehata) for the solution of inverse acoustic and electromagnetic scattering problems. The main ideas, approaches and convergence results of the methods are presented. For each method, we provide a historical survey of applications to different situations.
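As one concrete instance from this list (a standard formulation, assumed here rather than quoted from the review): for each sampling point z, the linear sampling method solves the far field equation

\[ \int_{\mathbb{S}^{d-1}} u^\infty(\hat{x}, d)\, g_z(d)\, ds(d) = e^{-ik\,\hat{x}\cdot z} \]

(up to a normalizing constant on the right-hand side) and uses the blow-up of the norm of g_z as z leaves the scatterer as an indicator for the boundary; the other methods surveyed replace this equation by different indicator functionals built from the same data.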
Abstract:
In this paper we consider the scattering of a plane acoustic or electromagnetic wave by a one-dimensional, periodic rough surface. We restrict the discussion to the case when the boundary is sound soft in the acoustic case and perfectly reflecting with TE polarization in the EM case, so that the total field vanishes on the boundary. We propose a uniquely solvable first kind integral equation formulation of the problem, which amounts to a requirement that the normal derivative of the Green's representation formula for the total field vanish on a horizontal line below the scattering surface. We then discuss the numerical solution by Galerkin's method of this (ill-posed) integral equation. We point out that, with two particular choices of the trial and test spaces, we recover the so-called SC (spectral-coordinate) and SS (spectral-spectral) numerical schemes of DeSanto et al., Waves Random Media 8 (1998) 315–414. We next propose a new Galerkin scheme, a modification of the SS method that we term the SS* method, which is an instance of the well-known dual least squares Galerkin method. We show that the SS* method is always well-defined and is optimally convergent as the size of the approximation space increases. Moreover, we make a connection with the classical least squares method, in which the coefficients in the Rayleigh expansion of the solution are determined by enforcing the boundary condition in a least squares sense, pointing out that the linear system to be solved in the SS* method is identical to that in the least squares method. Using this connection we show that (reflecting the ill-posed nature of the integral equation solved) the condition number of the linear system in the SS* and least squares methods approaches infinity as the approximation space increases in size. We also provide theoretical error bounds on the condition number and on the errors induced in the numerical solution computed as a result of ill-conditioning. Numerical results confirm the convergence of the SS* method and illustrate the ill-conditioning that arises.
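To make the least squares connection concrete (standard diffraction grating notation, assumed here): for a 2π-periodic surface and a quasi-periodic incident field with phase shift α, the scattered field above the surface has the Rayleigh expansion

\[ u^s(x) = \sum_{n \in \mathbb{Z}} a_n\, e^{i(\alpha_n x_1 + \beta_n x_2)}, \qquad \alpha_n = \alpha + n, \quad \beta_n = \sqrt{k^2 - \alpha_n^2}, \]

with Im β_n ≥ 0, and the classical least squares method determines finitely many coefficients a_n by minimizing the L² norm of u^s + u^i over one period of the boundary; the paper's observation is that the SS* method leads to exactly this linear system.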
Abstract:
Satellite-observed data for flood events have been used to calibrate and validate flood inundation models, providing valuable information on the spatial extent of the flood. Improvements in the resolution of this satellite imagery have enabled indirect remote sensing of water levels by using an underlying LiDAR DEM to extract the water surface elevation at the flood margin. In addition to comparison of the spatial extent, this now allows direct comparison between modelled and observed water surface elevations. Using a 12.5 m ERS-1 image of a flood event in 2006 on the River Dee, North Wales, UK, both of these data types are extracted and each is assessed for its value in the calibration of flood inundation models. A LiDAR-guided snake algorithm is used to extract an outline of the flood from the satellite image. From the extracted outline, a binary grid of wet/dry cells is created at the same resolution as the model; using this grid, the spatial extents of the modelled and observed flood can be compared with a measure of fit between the two binary patterns of flooding. Water heights are extracted at points spaced at intervals of approximately 100 m along the extracted outline, and Student's t-test is used to compare modelled and observed water surface elevations. A LISFLOOD-FP model of the catchment is set up using LiDAR topographic data resampled to the 12.5 m resolution of the satellite image, and calibration of the friction parameter in the model is undertaken using each of the two approaches. Comparison between the two approaches highlights the sensitivity of the spatial measure of fit to uncertainty in the observed data and the potential drawbacks of using the spatial extent when parts of the flood are contained by the topography.
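A minimal Python sketch of the kind of binary measure of fit described above; the specific statistic F = A/(A + B + C), counting wet-wet, model-only and observation-only cells, is a common choice in the flood modelling literature and is an assumption here, not a formula quoted from the paper:

    import numpy as np

    def fit_statistic(modelled_wet, observed_wet):
        """Measure of fit between two boolean wet/dry grids of equal shape.

        F = A / (A + B + C), where A counts cells wet in both model and
        observation, B cells wet in the model only (overprediction) and
        C cells wet in the observation only (underprediction).
        F = 1 is perfect agreement; F = 0 is no overlap at all.
        """
        a = np.sum(modelled_wet & observed_wet)
        b = np.sum(modelled_wet & ~observed_wet)
        c = np.sum(~modelled_wet & observed_wet)
        return a / (a + b + c)

Calibration of the friction parameter then amounts to running the model over a range of values and keeping the one that maximizes F (or, under the second approach, minimizes the discrepancy between modelled and observed water surface elevations at the sampled outline points).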
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions – the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community wide prediction experiment – CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site obtains an identical score to the same number of nonbinding residues predicted at random. The MCC is also affected by the subjectivity of determining the observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores whilst also being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
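The abstract does not give the BDT formula, so the following Python sketch shows only one plausible distance-decay score of the kind described; the linear decay and the cutoff d0 are purely illustrative assumptions:

    import numpy as np

    def distance_decay_score(pred_coords, obs_coords, d0=4.0):
        """Illustrative binding-site distance score in [0, 1].

        Each predicted residue contributes max(0, 1 - d/d0), where d is
        its distance (angstroms) to the nearest observed binding residue,
        so near-miss predictions earn partial credit instead of zero.
        """
        pred = np.asarray(pred_coords, dtype=float)  # shape (n_pred, 3)
        obs = np.asarray(obs_coords, dtype=float)    # shape (n_obs, 3)
        # pairwise distances between predicted and observed residues
        d = np.linalg.norm(pred[:, None, :] - obs[None, :, :], axis=-1)
        nearest = d.min(axis=1)
        return float(np.mean(np.clip(1.0 - nearest / d0, 0.0, 1.0)))

Unlike the MCC, a prediction adjacent to but not overlapping the observed site receives partial credit under such a score, which is the behaviour the abstract argues for.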
Abstract:
A modified chlorophyll fluorescence technique was evaluated as a rapid diagnostic test of the susceptibility of wheat cultivars to chlorotoluron. Two winter wheat cultivars (Maris Huntsman and Mercia) exhibited differential response to the herbicide. All of the parameters of chlorophyll fluorescence examined were strongly influenced by herbicide concentration. Additionally, the procedure adopted here for the examination of winter wheat cultivar sensitivity to herbicide indicated that the area above the fluorescence induction curve and the ratio F-V/F-M are appropriate chlorophyll fluorescence parameters for detection of differential herbicide response between wheat cultivars. The potential use of this technique as an alternative to traditional methods of screening new winter wheat cultivars for their response to photosynthetic inhibitor herbicide is demonstrated here.
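For reference, the fluorescence ratio mentioned above is conventionally defined (a standard definition, not restated in the abstract) as

\[ \frac{F_v}{F_m} = \frac{F_m - F_0}{F_m}, \]

where F_0 and F_m are the minimal and maximal fluorescence of a dark-adapted leaf; healthy tissue typically gives values near 0.8, and depressions below this indicate impaired photosystem II efficiency, which is why the ratio responds to a photosynthesis-inhibiting herbicide such as chlorotoluron.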
Abstract:
The proportional odds model provides a powerful tool for analysing ordered categorical data and setting sample size, although for many clinical trials its validity is questionable. The purpose of this paper is to present a new class of constrained odds models which includes the proportional odds model. The efficient score and Fisher's information are derived from the profile likelihood for the constrained odds model. These results are new even for the special case of proportional odds where the resulting statistics define the Mann-Whitney test. A strategy is described involving selecting one of these models in advance, requiring assumptions as strong as those underlying proportional odds, but allowing a choice of such models. The accuracy of the new procedure and its power are evaluated.
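For orientation, a minimal statement of the proportional odds model that the new class generalizes (standard notation, assumed here): for an ordered response Y with categories 1, …, k and a treatment indicator x,

\[ \log \frac{P(Y \le j \mid x)}{P(Y > j \mid x)} = \alpha_j - \theta x, \qquad j = 1, \dots, k-1, \]

so a single parameter θ shifts all k−1 cumulative log-odds equally; a constrained odds model instead restricts, rather than equates, these shifts, with the proportional odds model recovered as the special case above.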
Abstract:
This paper considers methods for testing for superiority or non-inferiority in active-control trials with binary data, when the relative treatment effect is expressed as an odds ratio. Three asymptotic tests for the log-odds ratio based on the unconditional binary likelihood are presented, namely the likelihood ratio, Wald and score tests. All three tests can be implemented straightforwardly in standard statistical software packages, as can the corresponding confidence intervals. Simulations indicate that the three alternatives are similar in terms of the Type I error, with values close to the nominal level. However, when the non-inferiority margin becomes large, the score test slightly exceeds the nominal level. In general, the highest power is obtained from the score test, although all three tests are similar and the observed differences in power are not of practical importance.
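As a hedged sketch of the simplest of the three tests (the Wald test for the log-odds ratio; textbook formulas, not code from the paper), using only the Python standard library:

    from math import log, sqrt
    from statistics import NormalDist

    def wald_noninferiority(r1, n1, r0, n0, or_margin):
        """One-sided Wald test of H0: OR <= or_margin vs H1: OR > or_margin.

        r1/n1: successes and size in the test arm; r0/n0: in the active
        control arm; or_margin: non-inferiority margin on the odds ratio
        scale (or_margin = 1 gives a superiority test). Assumes no zero
        cells; no continuity correction. Returns (z, one-sided p-value).
        """
        # log odds ratio and its large-sample standard error from the 2x2 table
        log_or = log(r1 * (n0 - r0)) - log(r0 * (n1 - r1))
        se = sqrt(1 / r1 + 1 / (n1 - r1) + 1 / r0 + 1 / (n0 - r0))
        z = (log_or - log(or_margin)) / se
        return z, 1 - NormalDist().cdf(z)

For example, wald_noninferiority(78, 100, 80, 100, or_margin=0.5) tests whether the odds of response in the test arm are at worst half those in the control arm; non-inferiority is concluded when the p-value falls below the one-sided significance level.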
Abstract:
This paper presents a reappraisal of the blood clotting response (BCR) tests for anticoagulant rodenticides, and proposes a standardised methodology for identifying and quantifying physiological resistance in populations of rodent species. The standardisation is based on the International Normalised Ratio, which is standardised against a WHO international reference preparation of thromboplastin and allows comparison of data obtained using different thromboplastin reagents. The methodology is statistically sound, being based on the 50% response, and has been validated against the Norway rat (Rattus norvegicus) and the house mouse (Mus domesticus). Susceptibility baseline data are presented for warfarin, diphacinone, chlorophacinone and coumatetralyl against the Norway rat, and for bromadiolone, difenacoum, difethialone, flocoumafen and brodifacoum against the Norway rat and the house mouse. A 'test dose' of twice the ED50 can be used for initial identification of resistance, and will provide a similar level of information to previously published methods. Higher multiples of the ED50 can be used to assess the resistance factor, and to predict the likely impact on field control.
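For reference, the International Normalised Ratio on which the standardisation rests is defined (a standard definition, not restated above) as

\[ \mathrm{INR} = \left( \frac{\mathrm{PT}_{\text{test}}}{\mathrm{PT}_{\text{normal}}} \right)^{\mathrm{ISI}}, \]

where PT is the prothrombin time and ISI is the International Sensitivity Index of the thromboplastin reagent calibrated against the WHO reference preparation; raising the PT ratio to the ISI power is what makes BCR results obtained with different reagents comparable.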
Abstract:
Aim. The aim of this study was to investigate whether a single soccer specific fitness test (SSFT) could differentiate between highly trained and recreationally active soccer players in selected test performance indicators. Methods. Subjects: 13 Academy Scholars (AS) from a professional soccer club and 10 Recreational Players (RP) agreed to participate in this study. Test 1: V̇O₂max was estimated from a progressive shuttle run test to exhaustion. Test 2: the SSFT was controlled by an automated procedure and alternated between walking, sprinting, jogging and cruise running speeds. Three activity blocks (1A, 2A and 3A) were separated by 3 min rest periods in which blood lactate samples were drawn. The 3 blocks of activity (Part A) were followed by 10 min of exercise at speeds alternating between jogging and cruise running (Part B). Results. Estimated V̇O₂max did not significantly differ between groups, although a trend towards a higher aerobic capacity was evident in AS (p<0.09). Exercising heart rates did not differ between AS and RP; however, recovery heart rates taken from the 3 min rest periods were significantly lower in AS than in RP following blocks 1A (124.65 ± 7.73 vs 133.98 ± 6.63 beats·min⁻¹, p<0.05) and 3A (129.91 ± 10.21 vs 138.85 ± 8.70 beats·min⁻¹, p<0.01). Blood lactate concentrations were significantly elevated in AS in comparison with RP following blocks 2A (6.91 ± 2.67 vs 4.74 ± 1.28 mmol·l⁻¹) and 3A (7.18 ± 2.97 vs 4.88 ± 1.50 mmol·l⁻¹) (p<0.05). AS sustained significantly faster average sprint times in block 3A than RP (3.18 ± 0.12 vs 3.31 ± 0.12 s, p<0.05). Conclusion. The results of this study show that highly trained soccer players are able to sustain, and more quickly recover from, high intensity intermittent exercise.