922 results for mixed-method evaluation
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the subintervals in which it is detected singing. The method can be viewed as generating data equivalent to closed capture–recapture information. It differs from the distance and multiple-observer methods in that it does not require all birds to sing during the point count. Because the method is new and there is some concern about how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts but also lets us know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated as a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of distant individuals and by the very low detection probabilities of individuals with low singing rates.
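The detection-history data described above can be illustrated with a small simulation. The sketch below, in Python, generates per-interval detection histories for a population of birds whose singing follows a two-state Markov chain (singing bouts alternating with silence); the transition and detection probabilities are hypothetical placeholders, and the distance-dependent detectability discussed in the abstract is collapsed into a single detection probability.

```python
import numpy as np

rng = np.random.default_rng(42)

def detection_history(n_intervals=4, p_start_bout=0.3, p_stay_in_bout=0.7,
                      p_detect_given_singing=0.9):
    """Simulate one bird's detection history over the point-count subintervals.

    A two-state Markov chain (singing bout / silence) decides whether the bird
    sings in each subinterval; a detection is recorded only if it sings and is
    heard. All probabilities here are hypothetical illustration values.
    """
    singing = rng.random() < p_start_bout
    history = []
    for _ in range(n_intervals):
        detected = singing and (rng.random() < p_detect_given_singing)
        history.append(int(detected))
        # Markovian transition: singing bouts tend to persist, as do silent spells
        singing = rng.random() < (p_stay_in_bout if singing else p_start_bout)
    return history

# Detection histories for a simulated population of 25 birds; birds with an
# all-zero history would be missed entirely by the point count.
histories = [detection_history() for _ in range(25)]
n_detected = sum(any(h) for h in histories)
print(f"{n_detected} of 25 simulated birds detected at least once")
```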
Abstract:
Using the Met Office large-eddy model (LEM) we simulate a mixed-phase altocumulus cloud that was observed from Chilbolton in southern England by a 94 GHz Doppler radar, a 905 nm lidar, a dual-wavelength microwave radiometer and four radiosondes. It is important to test and evaluate such simulations with observations, since there are significant differences between results from different cloud-resolving models for ice clouds. Simulating the Doppler radar and lidar data within the LEM allows us to compare observed and modelled quantities directly, and to explore the relationships between observed and unobserved variables. For general-circulation models, which currently tend to give poor representations of mixed-phase clouds, the case shows the importance of using: (i) separate prognostic ice and liquid water, (ii) a vertical resolution that captures the thin layers of liquid water, and (iii) an accurate representation of the subgrid vertical velocities that allow liquid water to form. It is shown that large-scale ascents and descents are significant for this case, and so the horizontally averaged LEM profiles are relaxed towards observed profiles to account for these. The LEM simulation then gives a reasonable representation of the cloud, with an ice-water path (IWP) approximately two thirds of that observed, and with liquid water at the cloud top, as observed. However, the liquid-water cells that form in the updraughts at cloud top in the LEM have liquid-water paths (LWPs) up to half those observed, and there are too few cells, giving a mean LWP five to ten times smaller than observed. In reality, ice nucleation and fallout may deplete ice-nuclei concentrations at the cloud top, allowing more liquid water to form there, but this process is not represented in the model. Decreasing the heterogeneous nucleation rate in the LEM increased the LWP, which supports this hypothesis. The LEM captures the increase in the standard deviation of Doppler velocities (and so of vertical winds) with height, but values are 1.5 to 4 times smaller than observed (although values are larger in an unforced model run, this only increases the modelled LWP by a factor of approximately two). The LEM data show that, for values larger than approximately 12 cm s⁻¹, the standard deviation of Doppler velocities provides an almost unbiased estimate of the standard deviation of vertical winds, but provides an overestimate for smaller values. Time-smoothing the observed Doppler velocities and the modelled mass-squared-weighted fallspeeds shows that observed fallspeeds are approximately two-thirds of the modelled values. Decreasing the modelled fallspeeds to those observed increases the modelled ice water content, giving an IWP 1.6 times that observed.
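The relaxation of horizontally averaged LEM profiles towards observations mentioned above is, in essence, Newtonian nudging. The sketch below is a minimal illustration of that idea only; the function name, the profiles, the time step and the relaxation timescale are assumptions for illustration, not values from the paper.

```python
import numpy as np

def relax_profile(model_profile, observed_profile, dt, tau):
    """Newtonian relaxation (nudging) of a horizontally averaged model profile
    towards an observed profile, one simple way to represent the effect of
    large-scale ascent/descent not captured within the model domain.
    dt is the time step and tau the relaxation timescale (both hypothetical here).
    """
    return model_profile + (dt / tau) * (observed_profile - model_profile)

# Example: relax a temperature profile on five levels towards sonde values
model_T = np.array([260.0, 258.5, 257.0, 255.5, 254.0])  # K, illustrative
obs_T   = np.array([260.2, 258.9, 257.6, 255.9, 254.1])  # K, illustrative
print(relax_profile(model_T, obs_T, dt=60.0, tau=3600.0))
```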
Abstract:
Recent observations from the Argo dataset of temperature and salinity profiles are used to evaluate a series of 3-year data assimilation experiments in a global ice–ocean general circulation model. The experiments are designed to evaluate a new data assimilation system whereby salinity is assimilated along isotherms, S(T). In addition, the role of a balancing salinity increment to maintain water mass properties is investigated. This balancing increment is found to effectively prevent spurious mixing in tropical regions induced by univariate temperature assimilation, allowing the correction of isotherm geometries without adversely influencing temperature–salinity relationships. In addition, the balancing increment is able to correct a fresh bias associated with a weak subtropical gyre in the North Atlantic using only temperature observations. The S(T) assimilation method is found to provide an important improvement over conventional depth level assimilation, with lower root-mean-squared forecast errors over the upper 500 m in the tropical Atlantic and Pacific Oceans. An additional set of experiments is performed whereby Argo data are withheld and used for independent evaluation. The most significant improvements from Argo assimilation are found in less well-observed regions (Indian, South Atlantic and South Pacific Oceans). When Argo salinity data are assimilated in addition to temperature, improvements to modelled temperature fields are obtained due to corrections to model density gradients and the resulting circulation. It is found that observations from the Argo array provide an invaluable tool for both correcting modelled water mass properties through data assimilation and for evaluating the assimilation methods themselves.
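A minimal sketch of the idea behind assimilating salinity along isotherms, S(T): salinity increments are formed by comparing observed and model salinity at the same temperature rather than at the same depth. The function and profiles below are illustrative assumptions (they are not the assimilation system evaluated in the paper), and the sketch assumes temperature varies monotonically with depth.

```python
import numpy as np

def salinity_increment_along_isotherms(T_model, S_model, T_obs, S_obs):
    """Illustrative S(T) increment: compare observed and model salinity at the
    same temperature (isotherm) rather than at the same depth.

    Assumes salinity can be treated as a function of temperature on the profile;
    real schemes must handle non-monotonic temperature profiles explicitly.
    """
    # Interpolate the observed S(T) relationship onto the model's temperatures.
    # np.interp requires increasing abscissae, so sort the observations by T.
    order = np.argsort(T_obs)
    S_obs_at_model_T = np.interp(T_model, T_obs[order], S_obs[order])
    return S_obs_at_model_T - S_model  # increment applied on the model levels

# Illustrative profiles (temperature in degC, salinity in psu)
T_model = np.array([28.0, 26.0, 22.0, 15.0, 8.0])
S_model = np.array([35.0, 35.2, 35.4, 35.1, 34.7])
T_obs   = np.array([28.5, 25.0, 20.0, 14.0, 8.5])
S_obs   = np.array([35.1, 35.3, 35.5, 35.2, 34.8])
print(salinity_increment_along_isotherms(T_model, S_model, T_obs, S_obs))
```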
Abstract:
Recently, various approaches have been suggested for dose escalation studies based on observations of both undesirable events and evidence of therapeutic benefit. This article concerns a Bayesian approach to dose escalation that requires the user to make numerous design decisions relating to the number of doses to make available, the choice of the prior distribution, the imposition of safety constraints and stopping rules, and the criteria by which the design is to be optimized. Results are presented of a substantial simulation study conducted to investigate the influence of some of these factors on the safety and the accuracy of the procedure with a view toward providing general guidance for investigators conducting such studies. The Bayesian procedures evaluated use logistic regression to model the two responses, which are both assumed to be binary. The simulation study is based on features of a recently completed study of a compound with potential benefit to patients suffering from inflammatory diseases of the lung.
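A schematic sketch of the ingredients named above: two binary responses (an undesirable event and therapeutic benefit), each following a logistic model in log-dose, simulated for successive cohorts. The escalation rule shown is a crude placeholder for the Bayesian criterion, prior and safety constraints studied in the paper, and all dose levels and coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical "true" dose-response curves on a log-dose scale: toxicity and
# benefit are both binary, each following a logistic model.
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # available dose levels (arbitrary units)
p_tox = logistic(-4.0 + 1.2 * np.log(doses))       # probability of an undesirable event
p_ben = logistic(-2.0 + 1.0 * np.log(doses))       # probability of therapeutic benefit

def simulate_cohort(dose_index, n=3):
    """Simulate binary toxicity/benefit outcomes for one cohort at one dose."""
    tox = rng.random(n) < p_tox[dose_index]
    ben = rng.random(n) < p_ben[dose_index]
    return tox, ben

# A crude escalation rule (placeholder for the Bayesian decision criterion):
# escalate while observed toxicity stays below a 33% safety constraint.
dose_index = 0
for cohort in range(5):
    tox, ben = simulate_cohort(dose_index)
    if tox.mean() < 1 / 3 and dose_index < len(doses) - 1:
        dose_index += 1
print("Final dose level:", doses[dose_index])
```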
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
Abstract:
Previous studies have reported that cheese curd syneresis kinetics can be monitored by dilution of chemical tracers, such as Blue Dextran, in whey. The objective of this study was to evaluate an improved tracer method to monitor the whey volumes expelled over time during syneresis. Two experiments with different ranges of milk fat (0-5% and 2.3-3.5%) were carried out in an 11 L double-O laboratory-scale cheese vat. Tracer was added to the curd-whey mixture during the cutting phase of cheese making and samples were taken at 10 min intervals up to 75 min after cutting. The volume of whey expelled was measured gravimetrically and the dilution of the tracer in the whey was measured by absorbance at 620 nm. The volumes of whey expelled were significantly reduced at higher milk fat levels. Whey yield was predicted with a standard error of prediction (SEP) ranging from 3.2 to 6.3 g whey/100 mL of milk and a coefficient of variation (CV) ranging from 2.03 to 2.7% at the different milk fat levels.
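The tracer approach rests on a simple dilution mass balance: the whey volume in which the tracer is dispersed equals the known tracer mass divided by the tracer concentration inferred from absorbance. The sketch below illustrates that calculation only; the tracer mass, calibration slope and absorbance readings are hypothetical, not the study's data.

```python
def whey_volume_from_absorbance(tracer_mass_g, absorbance, calib_slope):
    """Estimate the whey volume in which the tracer is dispersed from a simple
    dilution mass balance: C = tracer_mass / V, with A = calib_slope * C
    (a Beer-Lambert type calibration line). All values below are illustrative.
    """
    concentration = absorbance / calib_slope   # g/L from the calibration line
    return tracer_mass_g / concentration       # litres of whey

# Illustrative: 2 g of tracer, absorbance at 620 nm read at intervals after cutting
calib_slope = 1.5  # absorbance units per g/L, hypothetical calibration
for t_min, a620 in [(10, 0.60), (35, 0.45), (75, 0.38)]:
    vol = whey_volume_from_absorbance(2.0, a620, calib_slope)
    print(f"t = {t_min:2d} min: estimated whey volume = {vol:.2f} L")
```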
Abstract:
Integrated Arable Farming Systems (IAFS), which involve a reduction in the use of off-farm inputs, are attracting considerable research interest in the UK. The objective of these systems experiments is to compare their financial performance with that of conventional or current farming practices. To date, this comparison has taken little account of any environmental benefits (or disbenefits) of the two systems. The objective of this paper is to review the assessment methodologies available for the analysis of environmental impacts. To illustrate the results of this exercise, the methodology and environmental indicators chosen are then applied to data from one of the LINK - Integrated Farming Systems experimental sites. Data from the Pathhead site in Southern Scotland are used to evaluate the use of invertebrates and nitrate loss as environmental indicators within IAFS. The results suggest that between 1992 and 1995 the biomass of earthworms fell by 28 kg per hectare on the integrated rotation and rose by 31 kg per hectare on the conventional system. This led to environmental costs ranging between £2.24 and £13.44 per hectare for the integrated system and gains of between £2.48 and £14.88 per hectare for the conventional system. In terms of nitrate, the integrated system had an estimated loss of £72.21 per hectare, compared with £149.40 per hectare for the conventional system. Conclusions are drawn about the advantages and disadvantages of this type of analytical framework. Keywords: Farming systems; IAFS; Environmental valuation; Economics; Earthworms; Nitrates; Soil fauna
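The earthworm cost and gain figures quoted above are mutually consistent with a unit valuation of roughly £0.08 to £0.48 per kg of earthworm biomass (an inference from the numbers, not a figure stated in the abstract); a minimal check:

```python
# Implied unit value of earthworm biomass, derived from the figures quoted above
loss_kg, gain_kg = 28.0, 31.0                # kg/ha change: integrated vs conventional
low, high = 2.24 / loss_kg, 13.44 / loss_kg  # implied valuation range in £/kg
print(f"implied value: £{low:.2f} to £{high:.2f} per kg")
print(f"conventional gain: £{gain_kg * low:.2f} to £{gain_kg * high:.2f} per ha")
```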
Abstract:
A new technique for objective classification of boundary layers is applied to ground-based vertically pointing Doppler lidar and sonic anemometer data. The observed boundary layer has been classified into nine different types based on those in the Met Office ‘Lock’ scheme, using vertical velocity variance and skewness, along with attenuated backscatter coefficient and surface sensible heat flux. This new probabilistic method has been applied to three years of data from Chilbolton Observatory in southern England and a climatology of boundary-layer type has been created. A clear diurnal cycle is present in all seasons. The most common boundary-layer type is stable with no cloud (30.0% of the dataset). The most common unstable type is well mixed with no cloud (15.4%). Decoupled stratocumulus is the third most common boundary-layer type (10.3%) and cumulus under stratocumulus occurs 1.0% of the time. The occurrence of stable boundary-layer types is much higher in the winter than the summer and boundary-layer types capped with cumulus cloud are more prevalent in the warm seasons. The most common diurnal evolution of boundary-layer types, occurring on 52 days of our three-year dataset, is that of no cloud with the stability changing from stable to unstable during daylight hours. These results are based on 16393 hours, 62.4% of the three-year dataset, of diagnosed boundary-layer type. This new method is ideally suited to long-term evaluation of boundary-layer type parametrisations in weather forecast and climate models.
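A schematic of how the observables named above could drive a boundary-layer classification. The thresholds, decision order and reduced set of types below are placeholders for illustration; the published method is probabilistic, uses the full set of Met Office Lock types, and derives cloud presence from the attenuated backscatter coefficient.

```python
def classify_boundary_layer(w_variance, w_skewness, heat_flux, cloud_base_detected):
    """Schematic classifier driven by the four observables named in the abstract.
    Thresholds and decision order are hypothetical placeholders, not the
    published probabilistic scheme or the Lock-type definitions.
    """
    stable = heat_flux < 0 and w_variance < 0.1      # m2 s-2, illustrative threshold
    if stable:
        return "stable, cloudy" if cloud_base_detected else "stable, no cloud"
    if w_skewness > 0.2:                             # surface-driven convection
        return "well mixed, cumulus-capped" if cloud_base_detected else "well mixed, no cloud"
    return "decoupled stratocumulus" if cloud_base_detected else "well mixed, no cloud"

print(classify_boundary_layer(w_variance=0.05, w_skewness=-0.1,
                              heat_flux=-20.0, cloud_base_detected=False))
```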
Abstract:
Metal cation toxicity to basidiomycete fungi is poorly understood, despite its well-known importance in terrestrial ecosystems. Moreover, there is no reported methodology for the routine evaluation of metal toxicity to basidiomycetes. In the present study, we describe the development of a procedure to assess the acute toxicity of metal cations (Na⁺, K⁺, Li⁺, Ca²⁺, Mg²⁺, Co²⁺, Zn²⁺, Ni²⁺, Mn²⁺, Cd²⁺, and Cu²⁺) to the bioluminescent basidiomycete fungus Gerronema viridilucens. The method is based on the decrease in the intensity of bioluminescence resulting from injuries sustained by the fungal mycelium exposed to either essential or nonessential metal toxicants. The assay described herein enables us to propose a metal toxicity series for Gerronema viridilucens, based on median effective concentration (EC50) values obtained from bioluminescence intensity versus metal concentration data: Cd²⁺ > Cu²⁺ > Mn²⁺ ≈ Ni²⁺ ≈ Co²⁺ > Zn²⁺ > Mg²⁺ > Li⁺ > K⁺ ≈ Na⁺ > Ca²⁺, and to shed some light on the mechanism of toxic action of metal cations on basidiomycete fungi. Environ. Toxicol. Chem. 2010;29:320-326. © 2009 SETAC
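EC50 values of the kind used to build this toxicity series are typically obtained by fitting a sigmoidal curve to relative bioluminescence against concentration. The sketch below fits a two-parameter logistic curve with scipy; the concentrations and relative intensities are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def dose_response(log_c, log_ec50, hill_slope):
    """Two-parameter logistic curve for relative bioluminescence (1 = untreated control)."""
    return 1.0 / (1.0 + 10 ** (hill_slope * (log_c - log_ec50)))

# Illustrative data: metal concentration (mM) vs bioluminescence relative to control
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
rel_lum = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.10, 0.03])

popt, _ = curve_fit(dose_response, np.log10(conc), rel_lum, p0=[np.log10(0.3), 1.0])
print(f"estimated EC50 = {10 ** popt[0]:.2f} mM, Hill slope = {popt[1]:.2f}")
```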
Abstract:
A method for the determination of pesticide residues in water and sediment was developed using the QuEChERS method followed by gas chromatography-mass spectrometry (GC-MS). The method was validated in terms of accuracy, specificity, linearity, and detection and quantification limits. The recovery percentages obtained for the pesticides in water at different concentrations ranged from 63 to 116%, with relative standard deviations below 12%. The corresponding results for the sediment ranged from 48 to 115%, with relative standard deviations below 16%. The limits of detection for the pesticides in water and sediment were below 0.003 mg L⁻¹ and 0.02 mg kg⁻¹, respectively.
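A minimal sketch of the two validation statistics quoted above (recovery percentage and relative standard deviation) computed from replicate spiked samples; the spiking level and replicate values are illustrative only.

```python
import numpy as np

def recovery_and_rsd(measured, spiked_level):
    """Per-analyte recovery (%) and relative standard deviation (%) from
    replicate spiked samples; the values used below are illustrative.
    """
    measured = np.asarray(measured, dtype=float)
    recovery = 100.0 * measured.mean() / spiked_level
    rsd = 100.0 * measured.std(ddof=1) / measured.mean()
    return recovery, rsd

# Illustrative: five replicate water samples spiked at 0.05 mg/L
rec, rsd = recovery_and_rsd([0.043, 0.047, 0.041, 0.045, 0.044], spiked_level=0.05)
print(f"recovery = {rec:.1f}%, RSD = {rsd:.1f}%")
```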
Abstract:
If a plastic material is used as a print substrate, a special surface treatment is needed to obtain good and durable printing. The most widely used surface treatment technique at present is corona treatment, but it has unfortunately proved not to be very durable in the long term. In this project, plasma treatment, which here uses different gases to treat polypropylene, is shown to be a more effective treatment. When a plasma-treated surface is printed, the good print quality lasts much longer and the adhesion between the ink and the surface is retained. Adhesion is currently tested with a standard method (ASTM D3359), but this standard has proved unstable and dependent on many different factors, giving large variation in the test results. For this reason, new test methods have been developed to give a more consistent and reliable assessment of adhesion.
Abstract:
BACKGROUND: Annually, 2.8 million neonatal deaths occur worldwide, despite the fact that three-quarters of them could be prevented if available evidence-based interventions were used. Facilitation of community groups has been recognized as a promising method to translate knowledge into practice. In northern Vietnam, the Neonatal Health - Knowledge Into Practice trial evaluated facilitation of community groups (2008-2011) and succeeded in reducing the neonatal mortality rate (adjusted odds ratio, 0.51; 95 % confidence interval 0.30-0.89). The aim of this paper is to report on the process (implementation and mechanism of impact) of this intervention. METHODS: Process data were excerpted from diary information from meetings with facilitators and intervention groups, and from supervisor records of monthly meetings with facilitators. Data were analyzed using descriptive statistics. An evaluation including attributes and skills of facilitators (e.g., group management, communication, and commitment) was performed at the end of the intervention using a six-item instrument. Odds ratios were analyzed, adjusted for cluster randomization using general linear mixed models. RESULTS: To ensure eight active facilitators over 3 years, 11 Women's Union representatives were recruited and trained. Of the 44 intervention groups, composed of health staff and commune stakeholders, 43 completed their activities until the end of the study. In total, 95 % (n = 1508) of the intended monthly meetings with an intervention group and a facilitator were conducted. The overall attendance of intervention group members was 86 %. The groups identified 32 unique problems and implemented 39 unique actions. The identified problems targeted health issues concerning both women and neonates. Actions implemented were mainly communication activities. Communes supported by a group with a facilitator who was rated high on attributes and skills (n = 27) had lower odds of neonatal mortality (odds ratio, 0.37; 95 % confidence interval, 0.19-0.73) than control communes (n = 46). CONCLUSIONS: This evaluation identified several factors that might have influenced the outcomes of the trial: continuity of intervention groups' work, adequate attributes and skills of facilitators, and targeting problems along a continuum of care. Such factors are important to consider in scaling-up efforts.
Abstract:
The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30% and were adjusted based on a change in the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values were calculated, as was the percentage of coincidence between the true and predicted rankings for the 10% of bulls with the highest breeding values (TOP10). The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower percentages than probit/probit and logit/logit. The genetic parameter estimates and predictions of breeding values obtained with the logit/logit and probit/probit models were similar. In contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is a need to compare the estimation and prediction ability of the logit and probit link functions.
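The paper implemented simulation and genetic evaluation in R. The sketch below, in Python for illustration, shows only the core idea of generating a binary early-pregnancy record from a latent liability with h² = 0.40 and fitting it with logit and probit links by maximum likelihood; it omits the random genetic effects of a full animal model, and all other values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm
from scipy.special import expit

rng = np.random.default_rng(1)
n = 2000

# Latent-liability simulation: an additive genetic value plus residual, with the
# threshold chosen so that about 20% of females are precocious (illustrative).
h2 = 0.40
g = rng.normal(0.0, np.sqrt(h2), n)
liability = g + rng.normal(0.0, np.sqrt(1.0 - h2), n)
y = (liability > np.quantile(liability, 0.80)).astype(float)

def neg_loglik(beta, link):
    """Negative log-likelihood of a binary GLM with a logit or probit link."""
    eta = beta[0] + beta[1] * g
    p = expit(eta) if link == "logit" else norm.cdf(eta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

for link in ("logit", "probit"):
    fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(link,))
    print(link, "coefficients:", np.round(fit.x, 3))
```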