892 results for "estimating conditional probabilities"
Abstract:
Mycobacterium bovis infects badgers (Meles meles), a wildlife species linked with the spread of the associated disease, tuberculosis (TB), in cattle. Control of livestock infections depends in part on the spatial and social structure of the wildlife host. Here we describe the spatial association of M. bovis infection in a badger population using data from the first year of the Four Area Project in Ireland. Using second-order intensity functions, we show there is strong evidence of clustering of TB cases in each of the four areas, i.e. a global tendency for infected cases to occur near other infected cases. Using estimated intensity functions, we identify locations where particular strains of TB cluster. Generalized linear geostatistical models are used to assess the practical range at which spatial correlation occurs; this range is found to exceed 6 in all areas. The study is relevant to the scale of localized badger culling in the control of the disease in cattle.
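A rough illustration of the clustering test described above, not the authors' edge-corrected analysis: the sketch below simulates artificially clustered case locations and compares a naive Ripley's K (the cumulative summary associated with the second-order intensity) against a Monte Carlo envelope under complete spatial randomness. All data and parameters are invented.

```python
# Minimal sketch (hypothetical data): a Monte Carlo test for clustering of case
# locations using a naive Ripley's K estimate with no edge correction.
import numpy as np

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction) at distances r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # exclude self-pairs
    lam = n / area                                   # intensity (points per unit area)
    return np.array([(d < ri).sum() / (n * lam) for ri in r])

rng = np.random.default_rng(0)
side, area = 10.0, 100.0                             # square study area
cases = rng.normal(loc=5.0, scale=1.0, size=(60, 2)) # artificially clustered "cases"
r = np.linspace(0.1, 3.0, 15)
k_obs = ripley_k(cases, r, area)

# Envelope under complete spatial randomness (no clustering).
sims = np.array([ripley_k(rng.uniform(0, side, size=cases.shape), r, area)
                 for _ in range(99)])
upper = np.percentile(sims, 97.5, axis=0)
print("clustering suggested at distances where K_obs exceeds the CSR envelope:")
print(r[k_obs > upper])
```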
Abstract:
Surveys of commercial markets combined with molecular taxonomy (i.e. molecular monitoring) provide a means to detect products from illegal, unregulated and/or unreported (IUU) exploitation, including the sale of fisheries bycatch and wild meat (bushmeat). Capture-recapture analyses of market products using DNA profiling have the potential to estimate the total number of individuals entering the market. However, these analyses are not directly analogous to those of living individuals because a ‘market individual’ does not die suddenly but, instead, remains available for a time in decreasing quantities, rather like the exponential decay of a radioactive isotope. Here we use mitochondrial DNA (mtDNA) sequences and microsatellite genotypes to individually identify products from North Pacific minke whales (Balaenoptera acutorostrata ssp.) purchased in 12 surveys of markets in the Republic of (South) Korea from 1999 to 2003. By applying a novel capture-recapture model with a decay rate parameter to the 205 unique DNA profiles found among 289 products, we estimated that the total number of whales entering trade across the five-year survey period was 827 (SE, 164; CV, 0.20) and that the average ‘half-life’ of products from an individual whale on the market was 1.82 months (SE, 0.24; CV, 0.13). Our estimate of whales in trade (reflecting the true numbers killed) was significantly greater than the officially reported bycatch of 458 whales for this period. This unregulated exploitation has serious implications for the survival of this genetically distinct coastal population. Although our capture-recapture model was developed for specific application to the Korean whale-meat markets, the exponential decay function could be modified to improve estimates of trade in other wild-meat or fisheries markets, or estimates of the abundance of living populations from noninvasive genotyping.
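To make the decay idea concrete, here is a small simulation sketch, not the authors' actual capture-recapture model: whales enter trade at random times, their products remain purchasable with probability exp(-lambda * elapsed time), and periodic market surveys "capture" DNA profiles only while products persist. All counts and rates below are invented.

```python
# Minimal sketch (all parameters invented) of the decay idea behind the model:
# a whale's products stay on the market with probability exp(-lambda * elapsed),
# so later surveys can only "recapture" individuals whose products remain.
import numpy as np

rng = np.random.default_rng(1)
n_whales = 800                        # true number entering trade (unknown in practice)
half_life = 1.8                       # months; decay rate lambda = ln(2) / half_life
lam = np.log(2.0) / half_life
entry = rng.uniform(0, 60, n_whales)  # month each whale enters trade
surveys = np.arange(6, 60, 6)         # survey months
p_buy = 0.05                          # chance a survey samples an available whale

detections = []
for t in surveys:
    available = entry <= t
    still_present = rng.random(n_whales) < np.exp(-lam * np.clip(t - entry, 0, None))
    sampled = rng.random(n_whales) < p_buy
    detections.append(np.flatnonzero(available & still_present & sampled))

unique_ids = np.unique(np.concatenate(detections))
print(f"{len(unique_ids)} distinct 'DNA profiles' across {len(surveys)} surveys "
      f"from a true total of {n_whales} whales")
```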
Abstract:
1. The crabeater seal Lobodon carcinophaga is considered to be a key species in the krill-based food web of the Southern Ocean. Reliable estimates of the abundance of this species are necessary to allow the development of multispecies, predator–prey models as a basis for management of the krill fishery in the Southern Ocean. 2. A survey of crabeater seal abundance was undertaken in 1 500 000 km² of pack-ice off east Antarctica between longitudes 64–150° E during the austral summer of 1999/2000. Sighting surveys, using double-observer line transect methods, were conducted from an icebreaker and two helicopters to estimate the density of seals hauled out on the ice in survey strips. Satellite-linked dive recorders were deployed on a sample of seals to estimate the probability of seals being hauled out on the ice at the times of day when sighting surveys were conducted. Model-based inference, involving fitting a density surface, was used to infer densities in the entire survey region from estimates in the surveyed areas. 3. Crabeater seal abundance was estimated to be between 0.7 and 1.4 million animals (with 95% confidence), with the most likely estimate slightly less than 1 million. 4. Synthesis and applications. The estimation of crabeater seal abundance in Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR) management areas off east Antarctica, where krill biomass has also been estimated recently, provides the data necessary to begin extending from single-species to multispecies management of the krill fishery. Incorporation of all major sources of uncertainty allows a precautionary interpretation of crabeater abundance and demand for krill in keeping with CCAMLR’s precautionary approach to management. While this study focuses on the crabeater seal and management of living resources in the Southern Ocean, it has also led to technical and theoretical developments in survey methodology that have widespread potential application in ecological and resource management studies, and will contribute to a more fundamental understanding of the structure and function of the Southern Ocean ecosystem.
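The haul-out correction works roughly as follows; the numbers below are invented, and the sketch ignores the density-surface modelling and the full uncertainty propagation used in the study.

```python
# Minimal sketch (invented numbers) of how a haul-out availability correction
# enters a line-transect density estimate: seals counted on the ice are scaled
# up by the estimated probability of being hauled out when surveys were flown.
n_detected = 1200          # seals detected in surveyed strips
effective_area = 4.0e3     # km^2 effectively searched (after detection modelling)
p_hauled_out = 0.60        # estimated from satellite-linked dive recorders
survey_region = 1.5e6      # km^2 of pack ice in the survey region

density_on_ice = n_detected / effective_area     # seals visible per km^2
density_total = density_on_ice / p_hauled_out    # correct for animals in the water
abundance = density_total * survey_region
print(f"point estimate of abundance: {abundance:,.0f} seals")
```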
Abstract:
Killer whale (Orcinus orca Linnaeus, 1758) abundance in the North Pacific is known only for a few populations for which extensive longitudinal data are available, with little quantitative data from more remote regions. Line-transect ship surveys were conducted in July and August of 2001–2003 in coastal waters of the western Gulf of Alaska and the Aleutian Islands. Conventional and Multiple Covariate Distance Sampling methods were used to estimate the abundance of different killer whale ecotypes, which were distinguished based upon morphological and genetic data. Abundance was calculated separately for two data sets that differed in the method by which killer whale group size data were obtained. Initial group size (IGS) data corresponded to estimates of group size at the time of first sighting, and post-encounter group size (PEGS) corresponded to estimates made after closely approaching sighted groups.
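A minimal sketch of the detection-function step underlying conventional distance sampling: fit a half-normal detection function g(x) = exp(-x²/(2σ²)) to perpendicular sighting distances by maximum likelihood and derive the mean detection probability within the truncation distance. The distances here are simulated, and the covariate extensions (MCDS) used in the study are omitted.

```python
# Minimal sketch (simulated distances): maximum-likelihood fit of a half-normal
# detection function, the core step of conventional distance sampling.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import halfnorm

w = 3.0                                            # truncation distance (km)
rng = np.random.default_rng(2)
# simulate sightings: uniform animal distances, detection prob g(x) with sigma = 1.2
x = rng.uniform(0, w, 5000)
kept = x[rng.random(x.size) < np.exp(-x**2 / (2 * 1.2**2))]

def neg_loglik(sigma):
    # density of an observed distance: g(x) / integral_0^w g(u) du
    norm_const = halfnorm.cdf(w, scale=sigma) * sigma * np.sqrt(np.pi / 2)
    return -np.sum(-kept**2 / (2 * sigma**2) - np.log(norm_const))

fit = minimize_scalar(neg_loglik, bounds=(0.1, 10.0), method="bounded")
sigma_hat = fit.x
# average detection probability within the strip, used to scale counts to abundance
p_hat = halfnorm.cdf(w, scale=sigma_hat) * sigma_hat * np.sqrt(np.pi / 2) / w
print(f"sigma_hat = {sigma_hat:.2f}, mean detection probability = {p_hat:.2f}")
```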
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. In step with theoretical developments, state-of-the-art software that implements these methods is described that makes the methods accessible to practicing ecologists.
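As a concrete illustration of points 4 and 6, here is the standard conventional distance sampling abundance calculation together with a multiplier for an indirect (dung) survey. All numbers are invented; in practice p_hat comes from a fitted detection function such as the one sketched earlier.

```python
# Minimal sketch (invented numbers) of the conventional distance sampling estimator
# and of a "multiplier" for indirect surveys: sign density is converted to animals
# via production and decay rates.
n = 420            # detections
L = 200.0          # total transect length (km)
w = 0.5            # truncation half-width (km)
p_hat = 0.62       # mean detection probability within the strip (from a fitted model)
A = 1500.0         # study area (km^2)

covered_area = 2 * w * L                  # area nominally searched
density = n / (covered_area * p_hat)      # objects per km^2
abundance = density * A
print(f"estimated abundance of detected objects: {abundance:,.0f}")

# Indirect survey: dung piles per km^2 -> animals per km^2 via multipliers
dung_density = density
production_rate = 18.0     # piles per animal per day
mean_decay_time = 45.0     # days a pile remains detectable
animal_density = dung_density / (production_rate * mean_decay_time)
print(f"implied animal density: {animal_density:.4f} per km^2")
```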
Abstract:
Double-observer line transect methods are becoming increasingly widespread, especially for the estimation of marine mammal abundance from aerial and shipboard surveys when detection of animals on the line is uncertain. The resulting data supplement conventional distance sampling data with two-sample mark–recapture data. Like conventional mark–recapture data, these have inherent problems for estimating abundance in the presence of heterogeneity. Unlike conventional mark–recapture methods, line transect methods use knowledge of the distribution of a covariate, which affects detection probability (namely, distance from the transect line) in inference. This knowledge can be used to diagnose unmodeled heterogeneity in the mark–recapture component of the data. By modeling the covariance in detection probabilities with distance, we show how the estimation problem can be formulated in terms of different levels of independence. At one extreme, full independence is assumed, as in the Petersen estimator (which does not use distance data); at the other extreme, independence only occurs in the limit as detection probability tends to one. Between the two extremes, there is a range of models, including those currently in common use, which have intermediate levels of independence. We show how this framework can be used to provide more reliable analysis of double-observer line transect data. We test the methods by simulation, and by analysis of a dataset for which true abundance is known. We illustrate the approach through analysis of minke whale sightings data from the North Sea and adjacent waters.
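The "full independence" end of the spectrum corresponds to the classical two-sample Petersen estimator, sketched below with invented counts; the mark-recapture distance sampling models discussed above additionally model how each observer's detection probability declines with distance, which is what allows unmodelled heterogeneity to be diagnosed.

```python
# Minimal sketch (invented counts) of the Petersen-style estimate from
# double-observer data, i.e. the full-independence case that ignores distance.
n1 = 110           # detections by observer 1
n2 = 95            # detections by observer 2
n_both = 60        # duplicates seen by both observers

p1_hat = n_both / n2          # P(observer 1 detects | observer 2 detected)
p2_hat = n_both / n1
n_seen = n1 + n2 - n_both     # distinct animals seen by at least one observer
N_hat = n1 * n2 / n_both      # Petersen / Lincoln estimate within the searched strip
print(f"p1_hat={p1_hat:.2f}, p2_hat={p2_hat:.2f}, "
      f"seen={n_seen}, Petersen estimate={N_hat:.0f}")
```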
Abstract:
Past research has demonstrated emergent conditional relations using a go/no-go procedure with pairs of figures displayed side-by-side on a computer screen. The present study sought to extend applications of this procedure. In Experiment 1, we evaluated whether emergent conditional relations could be demonstrated when two-component stimuli were displayed in figure-ground relationships: abstract figures displayed on backgrounds of different colors. Five normally capable adults participated. During training, each two-component stimulus was presented successively. Responses emitted in the presence of some stimulus pairs (A1B1, A2B2, A3B3, B1C1, B2C2 and B3C3) were reinforced, whereas responses emitted in the presence of other pairs (A1B2, A1B3, A2B1, A2B3, A3B1, A3B2, B1C2, B1C3, B2C1, B2C3, B3C1 and B3C2) were not. During tests, new configurations (AC and CA) were presented, thus emulating structurally the matching-to-sample tests employed in typical equivalence studies. All participants showed emergent relations consistent with stimulus equivalence during testing. In Experiment 2, we systematically replicated the procedures with stimulus compounds consisting of four figures (A1, A2, C1 and C2) and two locations (left - B1 and right - B2). All 6 normally capable adults exhibited emergent stimulus-stimulus relations. Together, these experiments show that the go/no-go procedure is a potentially useful alternative for studying emergent conditional relations when matching-to-sample is procedurally cumbersome or impossible to use.
Abstract:
In epidemiology, the basic reproduction number R0 is usually defined as the average number of new infections caused by a single infective individual introduced into a completely susceptible population. According to this definition, R0 is related to the initial stage of the spreading of a contagious disease. However, in epidemiological models based on ordinary differential equations (ODE), R0 is commonly derived from a linear stability analysis and interpreted as a bifurcation parameter: typically, when R0 > 1, the contagious disease tends to persist in the population because the endemic stationary solution is asymptotically stable; when R0 < 1, the corresponding pathogen tends to naturally disappear because the disease-free stationary solution is asymptotically stable. Here we intend to answer the following question: do these two different approaches for calculating R0 give the same numerical values? In other words, is the number of secondary infections caused by a unique sick individual equal to the threshold obtained from stability analysis of the steady states of the ODE? To find the answer, we use a susceptible-infective-recovered (SIR) model described in terms of ODE and also in terms of a probabilistic cellular automaton (PCA), where each individual (corresponding to a cell of the PCA lattice) is connected to others by a random network favoring local contacts. The values of R0 obtained from both approaches are compared, showing good agreement. (C) 2012 Elsevier B.V. All rights reserved.
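On the ODE side, the threshold interpretation can be reproduced in a few lines. The sketch below uses the standard SIR equations with illustrative parameters (not the paper's PCA or network structure) and shows the qualitative change in behaviour as R0 = beta/gamma crosses 1.

```python
# Minimal sketch (illustrative parameters) of the ODE side of the comparison:
# dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I
# (S, I, R as fractions); linear stability analysis gives R0 = beta/gamma.
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

y0 = [0.999, 0.001, 0.0]
for beta, gamma in [(0.5, 0.25), (0.2, 0.25)]:     # R0 = 2.0 and R0 = 0.8
    sol = solve_ivp(sir, (0, 200), y0, args=(beta, gamma))
    peak_I = sol.y[1].max()
    print(f"R0 = {beta/gamma:.1f}: peak infective fraction = {peak_I:.4f}")
# With R0 > 1 the infection takes off; with R0 < 1 it decays monotonically
# toward the disease-free state.
```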
Abstract:
We examine the impact of Brazil's Bolsa Escola/Familia program on Brazilian children's education outcomes. Bolsa provides cash payments to poor households if their children (ages 6 to 15) are enrolled in school. Using school census data to compare changes in enrollment, dropping out and grade promotion across schools that adopted Bolsa at different times, we estimate that the program has: increased enrollment by about 5.5% (6.5%) in grades 1-4 (grades 5-8); lowered dropout rates by 0.5 (0.4) percentage points in grades 1-4 (grades 5-8); and raised grade promotion rates by 0.9 (0.3) percentage points in grades 1-4 (grades 5-8). About one third of Brazil's children participate in Bolsa, so assuming no spillover effects onto non-participants implies that Bolsa's impacts are three times higher than these estimates. However, simple calculations using enrollment impacts suggest that Bolsa's benefits in terms of increased wages may not exceed its costs. (C) 2011 Elsevier B.V. All rights reserved.
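The comparison across schools adopting at different times is, in spirit, a staggered difference-in-differences design. The sketch below builds a synthetic school-by-year panel with an invented treatment effect and recovers it with a two-way fixed-effects regression; it only illustrates the estimation logic and is not the authors' specification.

```python
# Minimal sketch (synthetic panel, invented effect sizes): schools adopt the
# program in different years, and a two-way fixed-effects regression compares
# changes in enrollment across adopting and not-yet-adopting schools.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for s in range(200):
    adopt_year = rng.choice([2001, 2002, 2003, 2099])   # 2099 = never adopts
    base = rng.normal(300, 40)
    for y in range(2000, 2006):
        treated = int(y >= adopt_year)
        enroll = base + 2.0 * (y - 2000) + 16.0 * treated + rng.normal(0, 10)
        rows.append({"school": s, "year": y, "treated": treated, "enroll": enroll})
df = pd.DataFrame(rows)

fit = smf.ols("enroll ~ treated + C(school) + C(year)", data=df).fit()
print(f"estimated program effect on enrollment: {fit.params['treated']:.1f} pupils")
```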
Abstract:
In this paper we use Markov chain Monte Carlo (MCMC) methods in order to estimate and compare GARCH models from a Bayesian perspective. We allow for possibly heavy tailed and asymmetric distributions in the error term. We use a general method proposed in the literature to introduce skewness into a continuous unimodal and symmetric distribution. For each model we compute an approximation to the marginal likelihood, based on the MCMC output. From these approximations we compute Bayes factors and posterior model probabilities. (C) 2012 IMACS. Published by Elsevier B.V. All rights reserved.
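A bare-bones version of the Bayesian estimation step, assuming normal errors and a flat prior on the stationarity region (the paper's skewed, heavy-tailed error distributions and the marginal-likelihood and Bayes-factor computations are omitted):

```python
# Minimal sketch (simulated data): random-walk Metropolis MCMC for a GARCH(1,1)
# model with normal errors and a flat prior on the stationary region.
import numpy as np

rng = np.random.default_rng(4)

def simulate_garch(n, omega, alpha, beta):
    eps, sig2 = np.zeros(n), np.zeros(n)
    sig2[0] = omega / (1 - alpha - beta)
    eps[0] = np.sqrt(sig2[0]) * rng.standard_normal()
    for t in range(1, n):
        sig2[t] = omega + alpha * eps[t-1]**2 + beta * sig2[t-1]
        eps[t] = np.sqrt(sig2[t]) * rng.standard_normal()
    return eps

def log_post(theta, eps):
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf                     # zero prior outside the stationary region
    sig2 = np.empty_like(eps)
    sig2[0] = eps.var()
    for t in range(1, len(eps)):
        sig2[t] = omega + alpha * eps[t-1]**2 + beta * sig2[t-1]
    return -0.5 * np.sum(np.log(2 * np.pi * sig2) + eps**2 / sig2)

eps = simulate_garch(1000, omega=0.1, alpha=0.1, beta=0.8)
theta = np.array([0.2, 0.2, 0.6])
lp = log_post(theta, eps)
draws = []
for _ in range(4000):                      # random-walk Metropolis
    prop = theta + rng.normal(0, [0.02, 0.02, 0.02])
    lp_prop = log_post(prop, eps)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    draws.append(theta)
print("posterior means (omega, alpha, beta):", np.mean(draws[2000:], axis=0).round(3))
```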
Abstract:
This paper studies the average control problem of discrete-time Markov Decision Processes (MDPs for short) with general state space, Feller transition probabilities, and possibly non-compact control constraint sets A(x). Two hypotheses are considered: either the cost function c is strictly unbounded or the multifunctions A_r(x) = {a ∈ A(x) : c(x, a) ≤ r} are upper-semicontinuous and compact-valued for each real r. For these two cases we provide new results for the existence of a solution to the average-cost optimality equality and inequality using the vanishing discount approach. We also study the convergence of the policy iteration approach under these conditions. It should be pointed out that we do not make any assumptions regarding the convergence and the continuity of the limit function generated by the sequence of relative differences of the α-discounted value functions and the Poisson equations as often encountered in the literature. (C) 2012 Elsevier Inc. All rights reserved.
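For intuition, the average-cost optimality machinery is easiest to see in the finite case. The sketch below runs policy iteration on a tiny random MDP: each evaluation step solves the Poisson equation g + h = c_pi + P_pi h with h(0) pinned to zero, and improvement is greedy in the relative values h. This is only a finite-state illustration; the paper's general-state-space, non-compact setting is far more delicate.

```python
# Minimal sketch (tiny finite MDP, invented costs) of average-cost policy iteration.
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(5)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
c = rng.uniform(0, 1, size=(n_states, n_actions))                 # cost c(s, a)

policy = np.zeros(n_states, dtype=int)
for _ in range(50):
    # policy evaluation: solve the Poisson equation for (g, h) with h[0] = 0
    P_pi = P[np.arange(n_states), policy]          # transition matrix under policy
    c_pi = c[np.arange(n_states), policy]
    A = np.eye(n_states) - P_pi
    A[:, 0] = 1.0                                  # column for the gain g replaces h[0]
    sol = np.linalg.solve(A, c_pi)
    g, h = sol[0], np.concatenate(([0.0], sol[1:]))
    # policy improvement: greedy with respect to the relative values h
    q = c + P @ h                                  # q[s, a]
    new_policy = q.argmin(axis=1)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print(f"optimal average cost per step: {g:.4f}, policy: {policy}")
```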