30 results for Direct Strength Method and Experiments
Abstract:
Using the GlobAEROSOL-AATSR dataset, estimates of the instantaneous, clear-sky, direct aerosol radiative effect and radiative forcing have been produced for the year 2006. Aerosol Robotic Network sun-photometer measurements have been used to characterise the random and systematic error in the GlobAEROSOL product for 22 regions covering the globe. Representative aerosol properties for each region were derived from a wide range of literature sources and, along with the de-biased GlobAEROSOL AODs, were used to drive an offline version of the Met Office unified model radiation scheme. In addition to the mean-AOD, best-estimate run of the radiation scheme, a range of additional calculations was performed to propagate uncertainty estimates in the AOD, optical properties and surface albedo, as well as errors due to the temporal and spatial averaging of the AOD fields. This analysis produced monthly, regional estimates of the clear-sky aerosol radiative effect and its uncertainty, which were combined to produce annual, global mean values of (−6.7 ± 3.9) W m−2 at the top of atmosphere (TOA) and (−12 ± 6) W m−2 at the surface. These results were then used to give estimates of regional, clear-sky aerosol direct radiative forcing, using modelled pre-industrial AOD fields for the year 1750 calculated for the AEROCOM PRE experiment. However, as it was not possible to quantify the uncertainty in the pre-industrial aerosol loading, these figures can only be taken as indicative and their uncertainties as lower bounds on the likely errors. Although the uncertainty in the aerosol radiative effect presented here is considerably larger than in most previous estimates, the explicit inclusion of the major sources of error in the calculations suggests that it is closer to the true constraint achievable with similar methodologies, and points to the need for more, and improved, estimates of both global aerosol loading and aerosol optical properties.
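As a rough illustration of the kind of uncertainty propagation described above, the sketch below runs a Monte Carlo over a single region's de-biased AOD and an assumed TOA radiative efficiency; all numbers are hypothetical placeholders, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000  # Monte Carlo draws

# Hypothetical regional inputs: de-biased AOD with its combined error, and a
# TOA radiative efficiency (W m^-2 per unit AOD) with its own spread.
aod_mean, aod_sigma = 0.18, 0.04
eff_mean, eff_sigma = -35.0, 8.0   # illustrative, not from the paper

aod = rng.normal(aod_mean, aod_sigma, N).clip(min=0.0)
eff = rng.normal(eff_mean, eff_sigma, N)

dre = aod * eff  # clear-sky direct radiative effect, W m^-2
print(f"TOA DRE = {dre.mean():.1f} +/- {dre.std():.1f} W m^-2")
```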
Abstract:
Payment cards are a useful device for measuring subjects’ preferences for a good, and especially their willingness to pay for it. Together with some other similar elicitation methods, payment cards are especially appropriate for both hypothetical and incentive-compatible valuations of a good, a property that has prompted many researchers to use them in studies comparing stated and revealed valuations. The Strategy Method (hereafter SM) is based on a principle similar to that of payment cards, but is aimed at eliciting a subject’s full profile of responses to each of the strategies available to the rival(s).
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
Abstract:
Details of the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. Correct experimental design is essential for collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be applied directly to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the number of experiments and data points required. (C) 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
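The authors' Bayesian rules are not given in the abstract; the sketch below uses the related classical idea of a locally D-optimal design for the Michaelis-Menten model, choosing substrate concentrations that maximise the determinant of the Fisher information around assumed prior parameter guesses. All numbers are hypothetical:

```python
import itertools
import numpy as np

def mm_sensitivities(S, Vmax, Km):
    """Partial derivatives of v = Vmax*S/(Km+S) w.r.t. (Vmax, Km)."""
    dv_dVmax = S / (Km + S)
    dv_dKm = -Vmax * S / (Km + S) ** 2
    return np.stack([dv_dVmax, dv_dKm], axis=1)

def d_optimality(S_points, Vmax, Km):
    """Determinant of the Fisher information for a candidate design."""
    J = mm_sensitivities(np.asarray(S_points, float), Vmax, Km)
    return np.linalg.det(J.T @ J)

# Prior parameter guesses (assumed), as a locally optimal design requires
Vmax0, Km0 = 1.0, 0.5

candidates = np.linspace(0.05, 5.0, 25)  # feasible substrate concentrations
best = max(itertools.combinations(candidates, 4),
           key=lambda d: d_optimality(d, Vmax0, Km0))
print("D-optimal 4-point design:", np.round(best, 2))
```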
Abstract:
1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular.
2. Dung and nest surveys typically have three elements: surveys to estimate the abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study.
3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed.
4. Logistic regression was used to estimate the probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function (see the sketch after this abstract).
5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty of estimating the abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which the abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted into estimates of animal abundance. Decay rates typically vary with season, weather, habitat, diet and many other factors, making reliable estimation of the mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
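A minimal sketch of the single-revisit approach in point 4, assuming simulated data and scikit-learn's LogisticRegression; the mean time to decay is recovered as the area under the fitted probability-of-persistence curve:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated retrospective data: each fresh dung pile is revisited once at
# time t (days); decayed = 1 if it has disappeared by the follow-up visit.
rng = np.random.default_rng(1)
t = rng.uniform(1, 150, 400)
decayed = (rng.exponential(60.0, 400) < t).astype(int)  # true mean: 60 days

model = LogisticRegression().fit(t.reshape(-1, 1), decayed)

# Probability that a pile is still present at time t, from the fitted model
grid = np.linspace(0, 365, 2000)
p_present = 1 - model.predict_proba(grid.reshape(-1, 1))[:, 1]

# Mean time to decay = area under the persistence (survival) curve
mean_decay_time = p_present.sum() * (grid[1] - grid[0])
print(f"estimated mean time to decay: {mean_decay_time:.1f} days")
```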
Abstract:
This paper compares and contrasts, for the first time, one- and two-component gelation systems that are direct structural analogues and draws conclusions about the molecular recognition pathways that underpin fibrillar self-assembly. The new one-component systems comprise L-lysine-based dendritic headgroups covalently connected to an aliphatic diamine spacer chain via an amide bond. One-component gelators with different generations of headgroup (from first to third generation) and different-length spacer chains are reported. The self-assembly of these dendrimers in toluene was elucidated using thermal measurements, circular dichroism (CD) and NMR spectroscopies, scanning electron microscopy (SEM), and small-angle X-ray scattering (SAXS). The observations are compared with previous results for the analogous two-component gelation system in which the dendritic headgroups are bound to the aliphatic spacer chain noncovalently via acid-amine interactions. The one-component system is inherently a more effective gelator, partly as a consequence of the additional covalent amide groups that provide a new hydrogen bonding molecular recognition pathway, whereas the two-component analogue relies solely on intermolecular hydrogen bond interactions between the chiral dendritic headgroups. Furthermore, because these amide groups are important in the assembly process for the one-component system, the chiral information preset in the dendritic headgroups is not always transcribed into the nanoscale assembly, whereas for the two-component system, fiber formation is always accompanied by chiral ordering because the molecular recognition pathway is completely dependent on hydrogen bond interactions between well-organized chiral dendritic headgroups.
Abstract:
Why it is easier to cut with even the sharpest knife when 'pressing down and sliding' than when merely 'pressing down alone' is explained. A variety of cases of cutting in which the blade and workpiece have different relative motions is analysed, and it is shown that the greater the 'slice/push ratio' ξ, given by (blade speed parallel to the cutting edge)/(blade speed perpendicular to the cutting edge), the lower the cutting forces. However, friction limits the reductions attainable at the highest ξ. The analysis is applied to the geometry of a wheel cutting device (delicatessen slicer), and experiments on a cheddar cheese and a salami using such an instrumented device confirm the general predictions. (C) 2004 Kluwer Academic Publishers.
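A tiny sketch of the ratio as defined above, applied to an assumed wheel-slicer geometry (the radius, spin rate and feed speed are illustrative, not from the paper):

```python
def slice_push_ratio(v_parallel, v_perpendicular):
    """xi = blade speed along the cutting edge / blade speed into the cut."""
    return v_parallel / v_perpendicular

# Wheel slicer: the rim moves at omega*R along the cutting edge while the
# workpiece is fed into the blade at v_feed (all numbers illustrative).
R, omega, v_feed = 0.15, 20.0, 0.005   # m, rad/s, m/s
xi = slice_push_ratio(omega * R, v_feed)
print(f"slice/push ratio xi = {xi:.0f}")   # -> 600
```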
Abstract:
The phase separation behaviour of aqueous mixtures of poly(methyl vinyl ether) and hydroxypropylcellulose has been studied by the cloud point method and viscometric measurements. The miscibility of these blends in the solid state has been assessed by infrared spectroscopy, methanol vapour sorption experiments and scanning electron microscopy. The Gibbs energies of mixing of the polymers and their blends with methanol, as well as of the polymers with each other, were calculated. It was found that in the solid state the polymers interact well with methanol but that the polymer-polymer interactions are unfavourable. Although the polymers exhibit some intermolecular interactions in aqueous solution, their solid blends are not completely miscible. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method. (c) 2005 Elsevier B.V. All rights reserved.
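The EM Contour algorithm itself is not spelled out in the abstract; the toy below is a sketch in the same spirit, assuming a Gaussian edge response plus uniform clutter along each contour normal. The E-step weights candidate features (yielding marginalized contour-location estimates) and the M-step solves a least-squares translation update; all geometry and noise parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Contour points on a circle, with outward unit normals
angles = np.linspace(0, 2 * np.pi, 60, endpoint=False)
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# True translation of the object; each search line along a normal yields one
# true edge offset plus uniformly distributed clutter responses.
t_true = np.array([0.6, -0.4])
sigma, lo, hi = 0.5, -5.0, 5.0
offsets = [np.concatenate([[normals[i] @ t_true + rng.normal(0, sigma)],
                           rng.uniform(lo, hi, 2)])
           for i in range(len(normals))]

t = np.zeros(2)      # initial pose (translation) estimate
p_edge = 0.3         # prior probability that a candidate is the true edge
for _ in range(25):
    A, b = np.zeros((2, 2)), np.zeros(2)
    for n_i, y_i in zip(normals, offsets):
        # E-step: responsibility that each candidate is the true edge
        g = (np.exp(-0.5 * ((y_i - n_i @ t) / sigma) ** 2)
             / (sigma * np.sqrt(2 * np.pi)))
        r = p_edge * g / (p_edge * g + (1 - p_edge) / (hi - lo))
        # M-step accumulators: weighted least squares for the translation
        A += r.sum() * np.outer(n_i, n_i)
        b += (r * y_i).sum() * n_i
    t = np.linalg.solve(A, b)

print("estimated translation:", np.round(t, 2), "true:", t_true)
```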
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of estimating the comparative violence of the explosion event. Non-mechanical methods used were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over 1 metre. Mechanical methods used included monitoring piezo-electric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles the original Hopkinson Bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a “detonation” pressure.
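The paper's actual pressure calculation is not given here; the sketch below is a crude momentum balance, under stated assumptions (bar geometry, lever arm, loaded area and pulse duration are all invented), showing how a measured rotation rate could be turned into an impulse and a mean pressure:

```python
# Toy momentum balance: the blast impulse kicks a uniform bar resting on the
# anvils into rotation about one end; measuring the rotation rate gives the
# impulse, and dividing by loaded area and pulse duration gives a rough
# mean pressure. All numbers below are illustrative assumptions.
m, L = 0.5, 0.30            # bar mass (kg) and length (m)
omega = 12.0                # measured rotation rate after the shock (rad/s)
r = 0.05                    # lever arm from pivot to load point (m)
A = 1.0e-4                  # area over which the shock acts (m^2)
dt = 20e-6                  # assumed pulse duration (s)

I = m * L**2 / 3.0          # moment of inertia of the bar about one end
J = I * omega / r           # linear impulse applied at the load point (N s)
P = J / (A * dt)            # mean pressure over the pulse (Pa)
print(f"impulse = {J:.2f} N s, mean pressure ~ {P/1e9:.2f} GPa")
```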
Abstract:
A method and an oligonucleotide compound for inhibiting replication of a nidovirus in virus-infected animal cells are disclosed. The compound (i) has a nuclease-resistant backbone, (ii) is capable of uptake by the infected cells, (iii) contains between 8 and 25 nucleotide bases, and (iv) has a sequence capable of disrupting base pairing between the transcriptional regulatory sequences in the 5′ leader region of the positive-strand viral genome and the negative-strand 3′ subgenomic region. In practicing the method, infected cells are exposed to the compound in an amount effective to inhibit viral replication.
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. The modeled vertical distribution of OA concentrations varies by over an order of magnitude between models, a diversity that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile nature of SOA, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For the 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine-aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both in strength and in seasonality. The combined model–measurement analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA, which can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison of all models against urban OC (OA) data at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data.
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge of the processes that govern aerosol processing, transport and removal, in addition to the sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
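For reference, the two evaluation statistics quoted above have standard definitions; a minimal sketch follows (the example series are invented):

```python
import numpy as np

def mean_normalized_bias(model, obs):
    """MNB = mean over points of (model - obs) / obs."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.mean((model - obs) / obs)

def temporal_correlation(model, obs):
    """Pearson correlation between co-located model and observed series."""
    return np.corrcoef(model, obs)[0, 1]

# Illustrative OC series at one station (ug m^-3)
obs = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
mod = np.array([1.0, 1.6, 0.9, 2.2, 1.1])
print(f"MNB = {mean_normalized_bias(mod, obs):.2f}, "
      f"r = {temporal_correlation(mod, obs):.2f}")
```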
Abstract:
Therapeutic activation of Toll-like receptors (TLR) has potential for cancer immunotherapy, for augmenting the activity of anti-tumor monoclonal antibodies (mAbs), and for improved vaccine adjuvants. A previous attempt to specifically target TLR agonists to dendritic cells (DC) using mAbs failed because conjugation led to non-specific binding and the mAbs lost specificity. We demonstrate here for the first time the successful conjugation of a small-molecule TLR7 agonist to an anti-tumor mAb (the anti-hCD20 rituximab) without compromising antigen specificity. The TLR7 agonist UC-1V150 was conjugated to rituximab using two conjugation methods, and the yield, molecular substitution ratio, retention of TLR7 activity and specificity of antigen binding were compared. Both conjugation methods produced rituximab-UC-1V150 conjugates with UC-1V150 : rituximab ratios ranging from 1:1 to 3:1, with drug loading quantified by UV spectroscopy and the drug substitution ratio verified by MALDI-TOF mass spectrometry. The yield of purified conjugates varied with conjugation method, and dropped as low as 31% using a method previously described for conjugating UC-1V150 to proteins, in which a bifunctional crosslinker was first reacted with rituximab and then with the TLR7 agonist. We therefore developed a direct conjugation method by producing an amine-reactive, UV-active version of UC-1V150, termed NHS:UC-1V150. Direct conjugation with NHS:UC-1V150 was quick and simple and gave improved conjugate yields of 65-78%. Rituximab-UC-1V150 conjugates had the expected pro-inflammatory activity in vitro (EC50 28-53 nM), a significantly increased activity over unconjugated UC-1V150 (EC50 547 nM). Antigen binding and specificity of the rituximab-UC-1V150 conjugates were retained: after incubation with human peripheral blood leukocytes, all conjugates bound strongly only to CD20-expressing B cells, while no non-specific binding to CD20-negative cells was observed. Selective targeting of Toll-like receptor activation directly within tumors or to DC is now feasible.
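The abstract quantifies drug loading by UV spectroscopy; a common way to do this is to solve a two-wavelength Beer-Lambert system for the antibody and drug concentrations. The extinction coefficients and absorbances below are placeholders, not measured values from the study:

```python
import numpy as np

# Two-wavelength Beer-Lambert system: total absorbance at 280 nm and at the
# agonist's absorbance maximum is a linear combination of the antibody and
# drug contributions. The coefficients would come from the free mAb and
# free agonist spectra; these are placeholders.
eps = np.array([[225000.0, 12000.0],    # [mAb, drug] at 280 nm (M^-1 cm^-1)
                [  5000.0, 28000.0]])   # [mAb, drug] at the drug's peak
A = np.array([1.20, 0.30])              # measured absorbances, 1 cm path

c_mab, c_drug = np.linalg.solve(eps, A)
print(f"substitution ratio (drug per mAb) = {c_drug / c_mab:.1f}")
```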
Abstract:
As part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined by profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed a systematic comparison of the behavior of different models under a consistent implementation of the WTG and DGW methods, and a systematic comparison of the two methods across models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both the WTG and DGW methods. Some of the models reproduce the reference state, while others sustain a large-scale circulation that results in either substantially lower or higher precipitation than in the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and the corresponding WTG simulation can produce circulations of opposite sign. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivity to the initial moisture conditions occurs in some WTG simulations, which exhibit multiple stable equilibria: a dry equilibrium state when initialized dry, or a precipitating equilibrium state when initialized moist. Multiple equilibria are seen in more WTG simulations at higher SST. In some models, the existence of multiple equilibria is sensitive to parameters in the WTG calculations.
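A minimal sketch of the WTG diagnostic at the heart of such experiments: the large-scale vertical velocity is chosen so that adiabatic cooling relaxes the column's potential-temperature anomaly back to the reference profile over a fixed timescale. The profiles, relaxation timescale and stability floor below are illustrative assumptions, not any model's actual configuration:

```python
import numpy as np

def wtg_vertical_velocity(theta, theta_ref, z, tau=3 * 3600.0):
    """Diagnose the WTG large-scale vertical velocity: ascent is chosen to
    relax the column's potential-temperature anomaly back to the reference
    profile over a timescale tau, i.e.
        w * d(theta_ref)/dz = (theta - theta_ref) / tau
    """
    dtheta_dz = np.gradient(theta_ref, z)
    dtheta_dz = np.maximum(dtheta_dz, 1e-4)  # guard against weak stability
    return (theta - theta_ref) / (tau * dtheta_dz)

# Illustrative profiles: a warm mid-tropospheric anomaly drives ascent
z = np.linspace(0, 15e3, 31)
theta_ref = 300.0 + 4.0e-3 * z
theta = theta_ref + 0.5 * np.exp(-((z - 7e3) / 2e3) ** 2)
w = wtg_vertical_velocity(theta, theta_ref, z)
print(f"peak WTG ascent: {w.max()*100:.2f} cm/s")
```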