881 results for field methods


Relevance:

100.00%

Publisher:

Abstract:

Environmental quality indicators provide resource managers with information useful for assessing coastal condition and making scientifically defensible decisions. Since 1984, the National Oceanic and Atmospheric Administration (NOAA), through its National Status and Trends (NS&T) Program, has provided environmental monitoring data on chemical, physical, and biological indicators of coastal environments. The program has two major monitoring components to meet its goals. The Bioeffects Assessments Program evaluates the health of bays, estuaries, and the coastal zone around the nation using the Sediment Quality Triad technique, which includes measuring sediment contaminant concentrations, sediment toxicity, and benthic community structure. The Mussel Watch Program is responsible for temporal coastal monitoring of contaminant concentrations by quantifying chemicals in bivalve mollusks. The NS&T Program is committed to providing the highest quality data to meet its statutory and scientific responsibilities. Data, metadata, and information products are managed within the guidance protocols and standards set forth by NOAA’s Integrated Ocean Observing System (IOOS) and the National Monitoring Network, as recommended by the 2004 Ocean Action Plan. To meet these data requirements, quality assurance protocols have been an integral part of the NS&T Program since its inception, and documentation of sampling and analytical methods is an essential part of quality assurance practices. A step-by-step summary of the Bioeffects Program’s field standard operating procedures (SOPs) is presented in this manual.

Relevance:

100.00%

Publisher:

Abstract:

A total of 54 free-ranging monkeys were captured and marked in Santa Rosa National Park, Costa Rica, during May 1985, and an additional 17 were captured during March 1986. The animals were darted using a blowpipe or a CO2 gun. The drugs used were Ketaset, Sernylan and Telazol. Ketaset was effective for Cebus capucinus but unsuccessful for Alouatta palliata and Ateles geoffroyi. Sernylan was successful for A. geoffroyi and A. palliata but is no longer commercially available. Telazol proved to be an excellent alternative capture drug for both A. palliata and A. geoffroyi.

Relevance:

100.00%

Publisher:

Abstract:

Large samples of multiplex pedigrees will probably be needed to detect susceptibility loci for schizophrenia by linkage analysis. Standardized ascertainment of such pedigrees from culturally and ethnically homogeneous populations may improve the probability of detection and replication of linkage. The Irish Study of High-Density Schizophrenia Families (ISHDSF) was formed by standardized ascertainment of multiplex schizophrenia families in 39 psychiatric facilities covering over 90% of the population in Ireland and Northern Ireland. Here we describe the phenotypic sample and a subset thereof, the linkage sample. Individuals were included in the phenotypic sample if adequate diagnostic information, based on personal interview and/or hospital record, was available. Only individuals with available DNA were included in the linkage sample. Inclusion of a pedigree in the phenotypic sample required at least two first-, second-, or third-degree relatives with non-affective psychosis (NAP), at least one of whom had schizophrenia (S) or poor-outcome schizoaffective disorder (PO-SAD). Entry into the linkage sample required DNA samples from at least two individuals with NAP, of whom at least one had S or PO-SAD. Affection was defined by narrow, intermediate, and broad criteria. The phenotypic sample contained 277 pedigrees and 1,770 individuals, and the linkage sample 265 pedigrees and 1,408 individuals. Using the intermediate definition of affection, the phenotypic sample contained 837 affected individuals and 526 affected sibling pairs; the parallel figures for the linkage sample were 700 and 420. Individuals with schizophrenia from these multiplex pedigrees resembled epidemiologically sampled cases with respect to age at onset, gender distribution, and most clinical symptoms, although they were more thought-disordered and had a poorer outcome. Power analyses based on a model of linkage heterogeneity indicated that the ISHDSF should be able to detect a major locus that influences susceptibility to schizophrenia in as few as 20% of families. Compared to first-degree relatives of epidemiologically sampled schizophrenic probands, first-degree relatives of schizophrenic members of the ISHDSF had a similar risk for schizotypal personality disorder, affective illness, alcoholism, and anxiety disorder. With sufficient resources, large-scale ascertainment of multiplex schizophrenia pedigrees is feasible, especially in countries with catchmented psychiatric care and stable populations. Although somewhat more severely ill, schizophrenic members of such pedigrees appear to clinically resemble typical schizophrenic patients. Our ascertainment process for multiplex schizophrenia families did not select for excess familial risk of affective illness or alcoholism. With its large sample ascertained in a standardized manner from a relatively homogeneous population, the ISHDSF provides considerable power to detect susceptibility loci for schizophrenia.

Relevance:

100.00%

Publisher:

Abstract:

Birds are vulnerable to collisions with human-made fixed structures. Despite ongoing development and increases in infrastructure, we have few estimates of the magnitude of collision mortality. We reviewed the existing literature on avian mortality associated with transmission lines and derived an initial estimate for Canada. Estimating mortality from collisions with power lines is challenging due to the lack of studies, especially from sites within Canada, and due to uncertainty about the magnitude of detection biases. Detection of bird collisions with transmission lines varies due to habitat type, species size, and scavenging rates. In addition, birds can be crippled by the impact and subsequently die, although crippling rates are poorly known and rarely incorporated into estimates. We used existing data to derive a range of estimates of avian mortality associated with collisions with transmission lines in Canada by incorporating detection, scavenging, and crippling biases. There are 231,966 km of transmission lines across Canada, mostly in the boreal forest. Mortality estimates ranged from 1 million to 229.5 million birds per year, depending on the bias corrections applied. We consider our most realistic estimate, taking into account variation in risk across Canada, to range from 2.5 million to 25.6 million birds killed per year. Data from multiple studies across Canada and the northern U.S. indicate that the most vulnerable bird groups are (1) waterfowl, (2) grebes, (3) shorebirds, and (4) cranes, which is consistent with other studies. Populations of several groups that are vulnerable to collisions are increasing across Canada (e.g., waterfowl, raptors), which suggests that collision mortality, at current levels, is not limiting population growth. However, there may be impacts on other declining species, such as shorebirds and some species at risk, including Alberta’s Trumpeter Swans (Cygnus buccinator) and western Canada’s endangered Whooping Cranes (Grus americana). Collisions may be more common during migration, which underscores the need to understand impacts across the annual cycle. We emphasize that these estimates are preliminary, especially considering the absence of Canadian studies.
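To make the bias-correction arithmetic concrete, here is a minimal sketch (our illustration, not the authors' model) of how a raw carcass count per kilometre of line might be scaled up for detection, scavenging, and crippling biases. All rates and the per-km count below are placeholder values; only the 231,966 km figure is taken from the abstract.

```python
# Hedged sketch of a generic bias-correction chain for carcass-count
# mortality estimates. All numeric inputs below are placeholders.

def corrected_mortality(carcasses_per_km: float,
                        detection_prob: float,
                        scavenger_persistence: float,
                        crippling_rate: float,
                        km_of_line: float) -> float:
    """Scale raw carcass counts up for undetected and scavenged birds,
    then add cripples that died away from the line."""
    found = carcasses_per_km * km_of_line
    # Correct for searcher detection and carcass persistence to scavengers.
    killed_on_site = found / (detection_prob * scavenger_persistence)
    # Add birds fatally injured but not recovered under the line.
    return killed_on_site * (1.0 + crippling_rate)

# Example with illustrative placeholder rates:
print(corrected_mortality(carcasses_per_km=0.5, detection_prob=0.6,
                          scavenger_persistence=0.7, crippling_rate=0.2,
                          km_of_line=231_966))
```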

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.
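As a concrete illustration of the simplest of these approximations, the sketch below applies naive mean field theory to an Ising model, solving the self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j) by damped fixed-point iteration. This is a generic textbook construction, not an excerpt from the book.

```python
# Naive mean field for an Ising model p(s) ∝ exp(Σ_ij J_ij s_i s_j + Σ_i h_i s_i),
# s_i ∈ {-1, +1}. The factorized approximation q(s) = Π_i q_i(s_i) yields the
# self-consistency equations m_i = tanh(h_i + Σ_j J_ij m_j).
import numpy as np

def naive_mean_field(J, h, damping=0.5, tol=1e-8, max_iter=1000):
    m = np.zeros_like(h)
    for _ in range(max_iter):
        m_new = np.tanh(h + J @ m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = damping * m + (1.0 - damping) * m_new  # damped update for stability
    return m

rng = np.random.default_rng(0)
n = 20
J = rng.normal(scale=0.1, size=(n, n)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.5, size=n)
print(naive_mean_field(J, h)[:5])  # approximate magnetizations <s_i>
```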

Relevance:

100.00%

Publisher:

Abstract:

We discuss the application of TAP mean field methods, known from the statistical mechanics of disordered systems, to Bayesian classification with Gaussian processes. In contrast to previous applications, no knowledge about the distribution of inputs is needed. Simulation results for the Sonar data set are given.
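The Gaussian-process setting of this paper is too involved for a short sketch, but the TAP idea itself, adding an Onsager reaction term to the naive mean field equations, can be illustrated on an Ising model with random couplings. The following is a generic illustration under that substitution, not the authors' algorithm.

```python
# TAP equations for an Ising model: the naive update m = tanh(h + J m) gains
# the Onsager reaction term m_i * Σ_j J_ij² (1 - m_j²), which accounts for
# correlations neglected by naive mean field. Illustrative only.
import numpy as np

def tap_mean_field(J, h, damping=0.5, tol=1e-8, max_iter=2000):
    m = np.zeros_like(h)
    for _ in range(max_iter):
        reaction = m * ((J**2) @ (1.0 - m**2))   # Onsager reaction term
        m_new = np.tanh(h + J @ m - reaction)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = damping * m + (1.0 - damping) * m_new
    return m

rng = np.random.default_rng(0)
n = 20
J = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n)); J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.5, size=n)
print(tap_mean_field(J, h)[:5])  # approximate magnetizations <s_i>
```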

Relevance:

100.00%

Publisher:

Abstract:

Due to copyright restrictions, this item is only available for consultation at Aston University Library and Information Services with prior arrangement.

Relevance:

100.00%

Publisher:

Abstract:

The purpose of the present study was to investigate percentage body fat (%BF) differences across three Spanish dance disciplines and to compare skinfold and bioelectrical impedance predictions of %BF in the same sample. Seventy-six female dancers, divided into three groups, Classical (n=23), Spanish (n=29), and Flamenco (n=24), were measured using skinfolds at four sites (triceps, subscapular, biceps, and iliac crest) and whole-body multi-frequency bioelectrical impedance analysis (BIA). The skinfold measures were used to predict %BF via Durnin and Womersley's equation, and the Segal, Sun, and Yannakoulia equations were applied to the BIA data. Differences in percent fat mass between groups (Classical, Spanish, and Flamenco) were tested using repeated measures analysis of variance (ANOVA). Pearson's product-moment correlations were computed between the %BF values obtained by the two methods, and Bland-Altman plots were used to assess agreement between the anthropometric and BIA methods. The repeated measures ANOVA did not find significant differences in %BF between disciplines (at the p<0.05 level). Fat percentage correlations ranged from r=0.57 to r=0.97 (all p<0.001). Bland-Altman analysis, with BIA Yannakoulia as the reference method, revealed differences with BIA Segal (-0.35 ± 2.32%, 95% CI: -0.89 to 0.18, p=0.38), BIA Sun (-0.73 ± 2.30%, 95% CI: -1.27 to -0.20, p=0.014), and Durnin-Womersley (-2.65 ± 2.48%, 95% CI: -3.22 to -2.07, p<0.0001). It was concluded that %BF estimates by BIA differed systematically from the skinfold method in young adult female dancers, with the Segal and Durnin-Womersley equations tending to underestimate relative to Yannakoulia as %BF increased; these methods are therefore not interchangeable.
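For readers unfamiliar with the agreement analysis used here, the following is a minimal Bland-Altman sketch with synthetic placeholder data (the study's measurements are not reproduced): the bias is the mean difference between methods, and the 95% limits of agreement are bias ± 1.96 SD.

```python
# Minimal Bland-Altman agreement analysis; data arrays are synthetic
# placeholders, not the study's measurements.
import numpy as np

def bland_altman(method_a, method_b):
    """Return bias (mean difference) and 95% limits of agreement."""
    diff = np.asarray(method_a) - np.asarray(method_b)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
bf_reference = rng.normal(22, 4, size=76)                     # e.g. BIA Yannakoulia
bf_other = bf_reference - 0.7 + rng.normal(0, 2.3, size=76)   # e.g. BIA Sun
bias, loa = bland_altman(bf_other, bf_reference)
print(f"bias = {bias:.2f} %BF, 95% limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f})")
```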

Relevance:

80.00%

Publisher:

Abstract:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, the dependence of the model on short-distance physics, where its validity is doubtful, is calculated. The model is very sensitive to cutoffs around one GeV.

Relevance:

70.00%

Publisher:

Abstract:

Introduction: The consistency of measuring small field output factors is greatly increased by reporting the measured dosimetric field size of each factor, as opposed to simply stating the nominal field size [1], and therefore requires the measurement of cross-axis profiles in a water tank. However, this makes output factor measurements time consuming. This project establishes at which field sizes the accuracy of output factors is not affected by the use of potentially inaccurate nominal field sizes, which we believe establishes a practical working definition of a 'small' field. The physical components of the radiation beam that contribute to the rapid change in output factor at small field sizes are examined in detail. The physical interaction that dominates the rapid dose reduction is quantified, leading to a theoretical definition of a 'small' field.

Methods: Current recommendations suggest that radiation collimation systems and isocentre-defining lasers should both be calibrated to permit a maximum positioning uncertainty of 1 mm [2]. The proposed practical definition of a small field is as follows: if the output factor changes by ±1.0% given a change in either field size or detector position of up to ±1 mm, then the field should be considered small. Monte Carlo modelling was used to simulate output factors of a 6 MV photon beam for square fields with side lengths from 4.0 to 20.0 mm in 1.0 mm increments. The dose was scored in a 0.5 mm wide and 2.0 mm deep cylindrical volume of water within a cubic water phantom, at a depth of 5 cm and an SSD of 95 cm. The maximum difference due to a collimator error of ±1 mm was found by comparing the output factors of adjacent field sizes. The output factor simulations were repeated 1 mm off-axis to quantify the effect of detector misalignment. Further simulations separated the total output factor into a collimator scatter factor and a phantom scatter factor. The collimator scatter factor was further separated into primary source occlusion effects and 'traditional' effects (a combination of flattening filter and jaw scatter, etc.). The phantom scatter was separated into photon scatter and electronic disequilibrium. Each of these factors was plotted as a function of field size in order to quantify how each affected the change in output factor at small field sizes.

Results: The use of our practical definition resulted in field sizes of 15 mm or less being characterised as 'small'. The change in field size had a greater effect than detector misalignment. For field sizes of 12 mm or less, electronic disequilibrium was found to cause the largest change in dose to the central axis (d = 5 cm). Source occlusion also caused a large change in output factor for field sizes less than 8 mm.

Discussion and conclusions: The measurement of cross-axis profiles is only required for output factor measurements at field sizes of 15 mm or less (for a 6 MV beam on a Varian iX linear accelerator). This is expected to depend on the linear accelerator spot size and photon energy. While some electronic disequilibrium was shown to occur at field sizes as large as 30 mm (the 'traditional' definition of a small field [3]), it does not cause a greater change than photon scatter until a field size of 12 mm, at which point it becomes by far the dominant effect.
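A minimal sketch of the practical test proposed above, assuming output factors tabulated at 1.0 mm increments of field size (the curve below is a placeholder, not the Monte Carlo data): a field is flagged as 'small' if moving to an adjacent field size, i.e. a ±1 mm error, changes the output factor by more than ±1.0%.

```python
# Flag field sizes where a ±1 mm field-size error shifts the output factor
# by more than ±1.0%. The output factor curve is a placeholder, not the
# paper's simulated data.
import numpy as np

side_mm = np.arange(4.0, 21.0, 1.0)              # square field side lengths
output_factor = 1.0 - np.exp(-side_mm / 4.0)     # placeholder monotone curve

def is_small(of, i):
    """±1 mm corresponds to comparing adjacent 1 mm tabulated entries."""
    neighbours = of[max(i - 1, 0)], of[min(i + 1, len(of) - 1)]
    return any(abs(n / of[i] - 1.0) > 0.01 for n in neighbours)

small = [s for i, s in enumerate(side_mm) if is_small(output_factor, i)]
print("fields flagged as 'small' (mm):", small)
```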

Relevance:

70.00%

Publisher:

Abstract:

Purpose: To explore the effect of small-aperture optics, designed to aid presbyopes by increasing ocular depth-of-focus, on measurements of the visual field.

Methods: Simple theoretical and ray-tracing models were used to predict the impact of different designs of small-aperture contact lenses or corneal inlays on the proportion of light passing through natural pupils of various diameters as a function of the direction in the visual field. The left eyes of five healthy volunteers were tested using three afocal, hand-painted opaque soft contact lenses (www.davidthomas.com). Two were opaque over a 10 mm diameter but had central clear circular apertures of 1.5 and 3.0 mm in diameter. The third had an annular opaque zone with inner and outer diameters of 1.5 and 4.0 mm, approximately simulating the geometry of the KAMRA inlay (www.acufocus.com). A fourth, clear lens was used for comparison purposes. Visual fields along the horizontal meridian were evaluated up to 50° eccentricity with static automated perimetry (Medmont M700, stimulus Goldmann-size III; www.medmont.com).

Results: According to ray-tracing, the two lenses with circular apertures were expected to reduce the relative transmittance of the pupil to zero at specific field angles (around 60° for the conditions of the experimental measurements). In contrast, the annular stop had no effect on the absolute field, but relative transmittance was reduced over the central area of the field, the exact effects depending upon the natural pupil diameter. Experimental results broadly agreed with these theoretical expectations. With the 1.5 and 3.0 mm pupils, only minor losses in sensitivity (around 2 dB) in comparison with the clear-lens case occurred across the central 10° radius of field. Beyond this angle, sensitivity losses increased, reaching about 7 dB at the edge of the measured field (50°). The field results with the annular stop showed at most only a slight loss in sensitivity (≤3 dB) across the measured field.

Conclusion: The present theoretical and experimental results support earlier clinical findings that KAMRA-type annular stops, unlike circular artificial pupils, have only minor effects on measurements of the visual field.
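The following geometric sketch is our own construction of the kind of simple model described in the Methods (not the authors' code): a stop in a plane slightly anterior to the pupil projects onto the pupil with a lateral shift of roughly d·tan(θ) at field angle θ, and relative transmittance is the unblocked fraction of pupil area. The 3 mm stop-to-pupil separation and 5 mm natural pupil are assumed values.

```python
# Projected-overlap model of relative pupil transmittance for a circular
# artificial pupil vs. an annular (KAMRA-like) stop. Dimensions illustrative.
import math

def disc_overlap(r1, r2, d):
    """Intersection area of two discs with radii r1, r2 and centre distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1**2 * math.acos((d*d + r1*r1 - r2*r2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d*d + r2*r2 - r1*r1) / (2 * d * r2))
    k = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                        * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - k

def rel_transmittance(pupil_d, clear_d, annulus_outer_d, theta_deg, sep_mm):
    """Unblocked pupil fraction; annulus_outer_d=None models a plain circular
    artificial pupil (opaque everywhere outside the clear aperture)."""
    rp, rc = pupil_d / 2, clear_d / 2
    shift = sep_mm * math.tan(math.radians(theta_deg))  # projected stop shift
    pupil_area = math.pi * rp**2
    clear = disc_overlap(rp, rc, shift)                 # light through centre
    if annulus_outer_d is not None:                     # plus light outside annulus
        clear += pupil_area - disc_overlap(rp, annulus_outer_d / 2, shift)
    return clear / pupil_area

# 1.5 mm circular aperture vs. 1.5/4.0 mm annulus; 5 mm pupil, assumed 3 mm
# separation between stop plane and pupil plane:
for theta in (0, 20, 40, 60):
    circ = rel_transmittance(5.0, 1.5, None, theta, 3.0)
    ann = rel_transmittance(5.0, 1.5, 4.0, theta, 3.0)
    print(f"{theta:2d} deg: circular {circ:.2f}, annular {ann:.2f}")
```

Consistent with the Results, the circular aperture's transmittance falls to zero near 60° in this model, while the annular stop only redistributes light centrally and never blocks the pupil completely.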

Relevance:

70.00%

Publisher:

Abstract:

The in vivo faecal egg count reduction test (FECRT) is the most commonly used test to detect anthelmintic resistance (AR) in gastrointestinal nematodes (GIN) of ruminants in pasture-based systems. However, there are several variations on the method, some more appropriate than others in specific circumstances. While in some cases labour and time can be saved by collecting only post-drench faecal worm egg counts (FEC) of treatment groups with controls, or pre- and post-drench FEC of a treatment group with no controls, there are circumstances when pre- and post-drench FEC of an untreated control group as well as of the treatment groups are necessary. Computer simulation techniques were used to determine the most appropriate of several methods for calculating AR when there is continuing larval development during the testing period, as often occurs when anthelmintic treatments against genera of GIN with high biotic potential or high re-infection rates, such as Haemonchus contortus of sheep and Cooperia punctata of cattle, are less than 100% efficacious. Three field FECRT experimental designs were investigated: (I) post-drench FEC of treatment and control groups, (II) pre- and post-drench FEC of a treatment group only, and (III) pre- and post-drench FEC of treatment and control groups. To investigate the performance of methods of indicating AR for each of these designs, simulated animal FEC were generated from negative binomial distributions, with subsequent sampling from binomial distributions to account for the drench effect, with varying parameters for worm burden, larval development, and drench resistance. Calculations of percent reductions and confidence limits were based on those of the Standing Committee for Agriculture (SCA) guidelines. For the two field methods with pre-drench FEC, confidence limits were also determined from cumulative inverse Beta distributions of FEC, for eggs per gram (epg) and the number of eggs counted at detection levels of 50 and 25. Two rules for declaring AR were also assessed: (1) percent reduction (%R) <95% and lower confidence limit <90%; and (2) upper confidence limit <95%. For each combination of worm burden, larval development, and drench resistance parameters, 1000 simulations were run to determine the number of times the theoretical percent reduction fell within the estimated confidence limits and the number of times resistance would have been declared. When continuing larval development occurs during the testing period of the FECRT, the simulations showed that AR should be calculated from pre- and post-drench worm egg counts of an untreated control group as well as of the treatment group. If the widely used resistance rule 1 is used to assess resistance, rule 2 should also be applied, especially when %R is in the range 90 to 95% and resistance is suspected.
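For concreteness, here is a sketch of the percent-reduction arithmetic that distinguishes the three designs (our reading of standard FECRT formulas, not the paper's simulation code); the example numbers illustrate how continuing larval development in the controls inflates apparent resistance under design II.

```python
# Percent reduction (%R) for the three FECRT designs described above.
# FEC values are illustrative eggs-per-gram group means, not study data.

def design_I(treated_post, control_post):
    """(I) post-drench FEC of treatment and control groups."""
    return 100.0 * (1.0 - treated_post / control_post)

def design_II(treated_pre, treated_post):
    """(II) pre- and post-drench FEC of the treatment group only."""
    return 100.0 * (1.0 - treated_post / treated_pre)

def design_III(treated_pre, treated_post, control_pre, control_post):
    """(III) pre/post FEC of both groups: the control ratio corrects for
    continuing larval development during the testing period."""
    return 100.0 * (1.0 - (treated_post / treated_pre)
                          * (control_pre / control_post))

# Continuing larval development doubles control counts during the test:
print(design_I(treated_post=60, control_post=800))    # 92.5 %
print(design_II(treated_pre=400, treated_post=60))    # 85.0 % (biased low)
print(design_III(400, 60, 400, 800))                  # 92.5 %
```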

Relevance:

70.00%

Publisher:

Abstract:

We present a survey of different numerical interpolation schemes used for two-phase transient heat conduction problems in the context of interface-capturing phase-field methods. Examples are general transport problems in the context of diffuse-interface methods with unequal heat conductivities in the directions normal and tangential to the interface. We extend the tensorial approach recently published by Nicoli M et al (2011 Phys. Rev. E 84 1-6) to the general three-dimensional (3D) transient evolution equations. Validations for one-dimensional, two-dimensional and 3D transient test cases are provided, and the results are in good agreement with analytical and numerical reference solutions.
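A minimal sketch of a tensorial conductivity blend of the kind referenced above, assuming the common decomposition K = k_n n⊗n + k_t (I − n⊗n) with the interface normal n taken from the phase-field gradient; this is our illustration of the general idea, not the authors' scheme.

```python
# Build an anisotropic conductivity tensor at one grid point from the
# phase-field gradient, so the heat flux q = -K grad(T) sees k_normal across
# the diffuse interface and k_tangential along it. Illustrative only.
import numpy as np

def conductivity_tensor(grad_phi, k_normal, k_tangential):
    """K = k_n * n⊗n + k_t * (I - n⊗n), with n = grad(phi)/|grad(phi)| (3D)."""
    n = grad_phi / (np.linalg.norm(grad_phi) + 1e-30)  # guard against |grad|=0
    nn = np.outer(n, n)
    return k_normal * nn + k_tangential * (np.eye(3) - nn)

# Interface normal along x: a temperature gradient along x is conducted with
# k_normal, while a gradient along y (tangential) sees k_tangential.
K = conductivity_tensor(np.array([1.0, 0.0, 0.0]),
                        k_normal=0.5, k_tangential=2.0)
print(np.round(K, 3))
print(-K @ np.array([0.0, 1.0, 0.0]))  # flux for a unit gradient along y
```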