975 results for data capture


Relevance: 60.00%

Abstract:

Purpose: The purpose of this paper was to review the effectiveness of telephone interviewing for capturing data and, in particular, to consider the challenges faced by telephone interviewers when capturing information about market segments. Design/methodology/approach: The platform for this methodological critique was a market segment analysis commissioned by Sport Wales, which involved a series of 85 telephone interviews completed during 2010. Two focus groups involving the six interviewers engaged in the study were convened to reflect on the researchers' experiences and the implications for business and management research. Findings: There are three principal sets of findings. First, although telephone interviewing is generally a cost-effective data collection method, it is important to consider both the actual costs (i.e. time spent planning and conducting interviews) and the opportunity costs (i.e. missed appointments, "chasing participants"). Second, researchers need to be sensitised and responsive to the demographic characteristics of telephone interviewees (insofar as these are knowable), because responses are influenced by them. Third, the anonymity of telephone interviews may be more conducive to discussing sensitive issues than face-to-face interactions. Originality/value: The present study adds to the modest body of literature on the implementation of telephone interviewing as a research technique in business and management. It provides valuable methodological background detail about the intricate, personal experiences of researchers undertaking this method "at a distance" and without visual cues, and makes explicit the challenges of telephone interviewing for the purposes of data capture.

Relevance: 60.00%

Abstract:

Several promising models have been developed for the digital capture of mobility data, which can be applied in urban, transport, and land-use planning. The objective of this work is therefore to develop a methodology that collects mobility information from which Origin-Destination (OD) and travel-time matrices can be generated, and that identifies points of interest, transport modes, and frequent travel routes, through the development and implementation of an application for Android mobile devices. Methodology: An application for mobile devices running the Android operating system was built, based on existing models. The application obtained mobility data from the location sensors (GPS) built into the phones, which were then migrated to a cloud database and post-processed with analysis tools such as KNIME, Python, and QuantumGis. The application was tested by 68 volunteer students of the Universidad de Cuenca over 14 days in January 2016. Results: With the complete information from 44 participants, OD and travel-time matrices were obtained for different periods of the day, which made it possible to identify variations in the interaction between zones and variations in the number and duration of trips. Transport modes such as walking, cycling, and motorized transport were also recognized for a subsample (n=6). The POIs Home (91%), Work/Study (74%), and intermediate points (20% of all POIs) were detected, and atypical mobility behaviour could be observed. Finally, the routes most frequented by users were compared with the computed theoretical optimal routes, finding that the journeys of 63.6% of users coincided with the latter. Conclusions: The proposed method is consistent with previous work, showing comparable confidence levels. The greatest challenge is the large-scale deployment of the model for collecting data useful for mobility plans.
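
As a hedged illustration of the post-processing step above, the following Python sketch builds an OD trip-count matrix and a mean travel-time matrix from a trip table; the zones, periods, and durations are invented, not the study's data.

```python
import pandas as pd

# Hypothetical trip table as it might look after post-processing the GPS
# traces: one row per detected trip (zones, periods, times are invented).
trips = pd.DataFrame({
    "origin_zone":      ["A", "A", "B", "C", "B"],
    "destination_zone": ["B", "C", "A", "A", "C"],
    "period":           ["am_peak", "am_peak", "pm_peak", "am_peak", "pm_peak"],
    "duration_min":     [18.0, 25.5, 20.0, 31.0, 15.0],
})

# Origin-Destination trip-count matrix for one period of the day
am = trips[trips["period"] == "am_peak"]
od_counts = pd.crosstab(am["origin_zone"], am["destination_zone"])

# Matching travel-time matrix (mean minutes per OD pair)
od_times = am.pivot_table(index="origin_zone", columns="destination_zone",
                          values="duration_min", aggfunc="mean")

print(od_counts)
print(od_times.round(1))
```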

Relevance: 40.00%

Abstract:

Matrix population models, elasticity analysis and loop analysis can potentially provide powerful techniques for the analysis of life histories. Data from a capture-recapture study on a population of southern highland water skinks (Eulamprus tympanum) were used to construct a matrix population model. Errors in elasticities were calculated by using the parametric bootstrap technique. Elasticity and loop analyses were then conducted to identify the life history stages most important to fitness. The same techniques were used to investigate the relative importance of fast versus slow growth, and rapid versus delayed reproduction. Mature water skinks were long-lived, but there was high immature mortality. The most sensitive life history stage was the subadult stage. It is suggested that life history evolution in E. tympanum may be strongly affected by predation, particularly by birds. Because our population declined over the course of the study, slow growth and delayed reproduction were the optimal life history strategies during this period. Although the techniques of evolutionary demography provide a powerful approach for the analysis of life histories, there are formidable logistical obstacles in gathering enough high-quality data for robust estimates of the critical parameters.
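
To make the elasticity computation concrete, here is a minimal Python sketch using an invented three-stage matrix (not the fitted E. tympanum model): the growth rate lambda is the dominant eigenvalue, and the elasticities follow from the sensitivity matrix.

```python
import numpy as np

# Hypothetical 3-stage model (juvenile, subadult, adult); these are
# illustrative values, not the fitted E. tympanum matrix from the study.
A = np.array([[0.0, 0.0, 1.2],    # stage-specific fecundities
              [0.3, 0.4, 0.0],    # survival / growth transitions
              [0.0, 0.5, 0.8]])

# Dominant eigenvalue gives the growth rate; right/left eigenvectors give
# the stable stage distribution w and the reproductive values v.
vals, W = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]
w = np.abs(W[:, i].real)

vals_l, V = np.linalg.eig(A.T)
j = np.argmax(vals_l.real)
v = np.abs(V[:, j].real)

# Sensitivities s_ij = v_i w_j / <v, w>; elasticities e_ij = (a_ij / lam) s_ij
S = np.outer(v, w) / (v @ w)
E = A * S / lam
print(f"lambda = {lam:.3f}")
print(E.round(3))   # elasticities are nonnegative and sum to 1
```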

Relevance: 40.00%

Abstract:

OBJECTIVE: As part of the WHO ICD-11 development initiative, the Topic Advisory Group on Quality and Safety explores meta-features of morbidity data sets, such as the optimal number of secondary diagnosis fields. DESIGN: The Health Care Quality Indicators Project of the Organization for Economic Co-Operation and Development collected Patient Safety Indicator (PSI) information from administrative hospital data of 19-20 countries in 2009 and 2011. We investigated whether three countries that expanded their data systems to include more secondary diagnosis fields showed increased PSI rates compared with six countries that did not. Furthermore, administrative hospital data from six of these countries and two American states, California (2011) and Florida (2010), were analysed for distributions of coded patient safety events across diagnosis fields. RESULTS: Among the participating countries, increasing the number of diagnosis fields was not associated with any overall increase in PSI rates. However, high proportions of PSI-related diagnoses appeared beyond the sixth secondary diagnosis field. The distribution of three PSI-related ICD codes was similar in California and Florida: 89-90% of central venous catheter infections and 97-99% of retained foreign bodies and accidental punctures or lacerations were captured within 15 secondary diagnosis fields. CONCLUSIONS: Six to nine secondary diagnosis fields are inadequate for comparing complication rates using hospital administrative data; at least 15 (and perhaps more with ICD-11) are recommended to fully characterize clinical outcomes. Increasing the number of fields should improve the international and intra-national comparability of data for epidemiologic and health services research, utilization analyses and quality of care assessment.
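
A sketch of the kind of field-position analysis reported here, under assumed column names (dx1..dx5) and illustrative ICD codes: compute the cumulative share of target events captured within the first k diagnosis fields.

```python
import pandas as pd

# Hypothetical discharge records with ordered diagnosis fields dx1..dx5;
# the ICD codes below are illustrative stand-ins for PSI-related codes.
records = pd.DataFrame({
    "dx1": ["T81.4", None,    "I10",   None,    "J18"],
    "dx2": [None,    "T81.5", None,    "T81.4", None],
    "dx3": [None,    None,    "T81.4", None,    None],
    "dx4": [None,    None,    None,    None,    "T81.5"],
    "dx5": [None,    None,    None,    None,    None],
})
target_codes = {"T81.4", "T81.5"}

dx_cols = ["dx1", "dx2", "dx3", "dx4", "dx5"]
hits = records[dx_cols].isin(target_codes)
total = hits.any(axis=1).sum()   # records with any target code at all

# Cumulative share of target events captured within the first k fields
for k in range(1, len(dx_cols) + 1):
    captured = hits.iloc[:, :k].any(axis=1).sum()
    print(f"within first {k} fields: {captured / total:.0%}")
```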

Relevance: 40.00%

Abstract:

This contribution investigates the problem of estimating the size of a population, also known as the missing cases problem. Suppose a registration system aims to identify all cases having a certain characteristic, such as a specific disease (cancer, heart disease, ...), a disease-related condition (HIV, heroin use, ...) or a specific behaviour (driving a car without a license). Every case in such a registration system has a certain notification history, in that it might have been identified several times (at least once), which can be understood as a particular capture-recapture situation. Typically, cases that have never been listed on any occasion are left out, and it is this frequency one wants to estimate. In this paper, modelling concentrates on the counting distribution, i.e. the distribution of the variable that counts how often a given case has been identified by the registration system. Besides very simple models such as the binomial or Poisson distribution, finite (nonparametric) mixtures of these are considered, providing rather flexible modelling tools. Estimation is done by maximum likelihood by means of the EM algorithm. A case study on heroin users in Bangkok in the year 2001 completes the contribution.
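
For the simplest case mentioned, a single (non-mixture) Poisson component, a minimal EM sketch for the zero-truncated counting distribution follows; the notification counts are invented for illustration.

```python
import numpy as np

def ztp_population_size(counts, tol=1e-10, max_iter=10_000):
    """EM for a zero-truncated Poisson counting distribution.

    E-step: impute the expected number of never-listed cases (the zeros);
    M-step: update the Poisson rate from the completed data.
    """
    counts = np.asarray(counts, dtype=float)
    n, total = counts.size, counts.sum()
    lam = counts.mean()                                   # starting value
    for _ in range(max_iter):
        f0 = n * np.exp(-lam) / (1.0 - np.exp(-lam))      # expected zeros
        lam_new = total / (n + f0)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    f0 = n * np.exp(-lam) / (1.0 - np.exp(-lam))
    return lam, n + f0

# Invented notification counts: how often each observed case was listed
counts = [1] * 120 + [2] * 40 + [3] * 10 + [4] * 2
lam_hat, N_hat = ztp_population_size(counts)
print(f"lambda = {lam_hat:.3f}, estimated population size = {N_hat:.0f}")
```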

Relevance: 40.00%

Abstract:

Population size estimation with discrete or nonparametric mixture models is considered, and reliable ways of constructing the nonparametric mixture model estimator are reviewed and set into perspective. The maximum likelihood estimator of the mixing distribution is constructed for any number of components, up to the global nonparametric maximum likelihood bound, using the EM algorithm. In addition, the estimators of Chao and Zelterman are considered, with some generalisations of Zelterman's estimator. All computations are done with CAMCR, special software developed for population size estimation with mixture models. Several examples and data sets are discussed and the estimators illustrated. Problems in using the mixture model-based estimators are highlighted.
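
Both estimators named here depend only on n and the frequencies f1 (cases seen once) and f2 (cases seen twice). A minimal sketch of the closed forms, with invented counts; this is not CAMCR.

```python
import numpy as np

def chao_lower_bound(f1, f2, n):
    """Chao's lower bound: N >= n + f1^2 / (2 * f2)."""
    return n + f1 ** 2 / (2.0 * f2)

def zelterman(f1, f2, n):
    """Zelterman: lambda_hat = 2 * f2 / f1, N_hat = n / (1 - exp(-lambda))."""
    lam = 2.0 * f2 / f1
    return n / (1.0 - np.exp(-lam))

# Invented frequency-of-frequencies data: f[k] cases identified exactly k times
f = {1: 120, 2: 40, 3: 10, 4: 2}
n = sum(f.values())
print(f"observed cases:      {n}")
print(f"Chao lower bound:    {chao_lower_bound(f[1], f[2], n):.0f}")
print(f"Zelterman estimator: {zelterman(f[1], f[2], n):.0f}")
```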

Relevance: 40.00%

Abstract:

In this paper, we apply one-list capture-recapture models to estimate the number of scrapie-affected holdings in Great Britain. We applied this technique to the Compulsory Scrapie Flocks Scheme dataset, which gathers cases from all the surveillance sources monitoring the presence of scrapie in Great Britain: the abattoir survey, the fallen stock survey and the statutory reporting of clinical cases. Consequently, the estimates of prevalence obtained from this scheme should be comprehensive and cover all the different presentations of the disease captured individually by the surveillance sources. Two estimators were applied under the one-list approach: the Zelterman estimator and Chao's lower-bound estimator. Our results could only inform with confidence the population of scrapie-affected holdings with clinical disease; this remained around 350 holdings in Great Britain for the period under study, April 2005 to April 2006. Our models allowed stratification by surveillance source and the input of covariate information, holding size and country of origin. None of the covariates appeared to inform the model significantly.

Relevance: 40.00%

Abstract:

The information provided by the International Commission for the Conservation of Atlantic Tunas (ICCAT) on captures of skipjack tuna (Katsuwonus pelamis) in the central-east Atlantic has a number of limitations, such as gaps in the statistics for certain fleets and the coarse spatiotemporal detail at which catches are reported. As a result, the quality of these data and their usefulness for providing management advice are limited. In order to reconstruct missing spatiotemporal catch data, the present study uses Data INterpolating Empirical Orthogonal Functions (DINEOF), a technique for missing-data reconstruction applied here for the first time to fisheries data. DINEOF is based on an Empirical Orthogonal Functions decomposition performed with a Lanczos method. DINEOF was tested with different amounts of missing data, intentionally removing between 3.4% and 95.2% of the values, and the reconstructions were then compared with the complete data set. These validation analyses show that DINEOF is a reliable methodological approach to data reconstruction for the purposes of fishery management advice, even when the amount of missing data is very high.
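
A minimal sketch of the DINEOF idea, iterative truncated-SVD reconstruction of the gaps; the full method chooses the number of EOFs by cross-validation and uses a Lanczos solver, both omitted here, and the input matrix is randomly generated rather than ICCAT data.

```python
import numpy as np

def dineof_fill(X, rank=2, tol=1e-6, max_iter=500):
    """Iteratively reconstruct missing entries from a truncated SVD (EOF)
    decomposition, updating only the gaps until the updates stabilise."""
    X = np.array(X, dtype=float)
    gaps = np.isnan(X)
    filled = np.where(gaps, np.nanmean(X), X)     # initialise gaps with mean
    for _ in range(max_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        change = np.sqrt(np.mean((recon[gaps] - filled[gaps]) ** 2))
        filled[gaps] = recon[gaps]                # update only the gaps
        if change < tol:
            break
    return filled

# Randomly generated "catch matrix" (areas x months) with about 20% gaps
rng = np.random.default_rng(0)
X = rng.gamma(2.0, 10.0, size=(8, 12))
X[rng.random(X.shape) < 0.2] = np.nan
print(dineof_fill(X).round(1))
```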

Relevance: 40.00%

Abstract:

Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ; hence minimally informative. The marginal prior distribution on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points, comparisons are made to previously reported results from a method-of-moments procedure. We looked at properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
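
A sketch of the data-generating side of the model described, simulating capture histories under the random-effects CJS structure; the design values and hyperparameters are hypothetical, and the study's priors appear only as reference comments. This is not the MCMC sampler itself.

```python
import numpy as np

rng = np.random.default_rng(42)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical design point and hyperparameters (not the study's values)
t, u = 7, 100                     # occasions, new releases per occasion
mu, sigma, p = 0.5, 0.8, 0.6

# Random-effects structure: logit(S) ~ Normal(mu, sigma^2), one S per interval
S = expit(rng.normal(mu, sigma, size=t - 1))

histories = []
for release in range(t - 1):
    for _ in range(u):
        h = np.zeros(t, dtype=int)
        h[release] = 1                              # marked and released
        alive = True
        for occ in range(release + 1, t):
            alive = alive and rng.random() < S[occ - 1]
            if alive and rng.random() < p:
                h[occ] = 1                          # recaptured
        histories.append(h)
histories = np.array(histories)

# Priors quoted in the study, for reference:
#   logit(p) ~ Normal(0, 1.75^2), mu ~ Normal(0, 100^2),
#   tau^2 = 1/sigma^2 ~ Gamma(alpha=0.001, beta=0.001)
print("captures per occasion:", histories.sum(axis=0))
```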

Relevance: 40.00%

Abstract:

The Data Envelopment Analysis (DEA) efficiency score obtained for an individual firm is a point estimate without any confidence interval around it. In recent years, researchers have resorted to bootstrapping in order to generate empirical distributions of efficiency scores. This procedure assumes that all firms have the same probability of getting an efficiency score from any specified interval within the [0,1] range. We propose a bootstrap procedure that empirically generates the conditional distribution of efficiency for each individual firm given systematic factors that influence its efficiency. Instead of resampling directly from the pooled DEA scores, we first regress these scores on a set of explanatory variables not included at the DEA stage and bootstrap the residuals from this regression. These pseudo-efficiency scores incorporate the systematic effects of unit-specific factors along with the contribution of the randomly drawn residual. Data from the U.S. airline industry are utilized in an empirical application.
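
A hedged sketch of the proposed conditional bootstrap: take first-stage DEA scores as given (invented numbers, not the airline data), regress them on an explanatory variable, and resample the residuals to obtain a firm-specific distribution of pseudo-efficiency scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented first-stage DEA efficiency scores (one per firm) and one
# explanatory variable not used at the DEA stage, e.g. a load factor.
theta = np.array([0.82, 0.91, 0.75, 1.00, 0.68, 0.88, 0.79, 0.95])
z = np.array([0.60, 0.72, 0.55, 0.80, 0.50, 0.70, 0.58, 0.77])

# Second stage: regress the scores on the explanatory variable
Z = np.column_stack([np.ones_like(z), z])
beta, *_ = np.linalg.lstsq(Z, theta, rcond=None)
resid = theta - Z @ beta

# Conditional bootstrap: keep each firm's fitted (systematic) part and
# resample only the residuals, yielding firm-specific distributions
B = 2000
pseudo = Z @ beta + rng.choice(resid, size=(B, theta.size), replace=True)
pseudo = np.clip(pseudo, None, 1.0)     # efficiency scores cannot exceed 1

lo, hi = np.percentile(pseudo, [2.5, 97.5], axis=0)
for i, (l, h) in enumerate(zip(lo, hi)):
    print(f"firm {i}: 95% interval [{l:.3f}, {h:.3f}]")
```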

Relevance: 40.00%

Abstract:

Due to copyright restrictions, only available for consultation at Aston University Library with prior arrangement.

Relevance: 30.00%

Abstract:

Information on marine and estuarine capture fishery activity in northern Todos os Santos Bay, northeastern Brazil, based on daily data collected between September 2003 and June 2005, is presented. Small-scale artisanal fishery in this area uses traditional vessels, both nonmotorized and motorized, and is carried out mainly by canoe or on foot; it involves many different kinds of gear, including gillnets, hook and line, seine nets, and traps. A total of 113 taxa were grouped into 77 resources, including 88 fish, 10 crustaceans, and 15 mollusks. Data on nominal catches of fish, crustaceans and mollusks are presented by month and location. A total of 345.2 tonnes of fishery resources were produced (285.4 tonnes of fish, 39.2 tonnes of fresh invertebrates, and 20.6 tonnes of processed invertebrates). Temporal variation in the fish catch was associated with the life cycles of the species or with hydrographic conditions. The first-sale value of this catch amounted to around US$ 615,000, with fish representing 71.3% of it. A table of the average price of each fishery resource is presented. The results produced in this study may be considered a reference for future monitoring programs of fishery resources in the area.

Relevance: 30.00%

Abstract:

Barium stars are optimal sites for studying the correlations between the neutron-capture elements and other species that may be depleted or enhanced, because they act as neutron seeds or poisons during the operation of the s-process. These data are necessary to help constrain the modeling of the neutron-capture paths and explain the s-process abundance curve of the solar system. Chemical abundances for a large number of barium stars with different degrees of s-process excess, masses, metallicities, and evolutionary states are a crucial step towards this goal. We present abundances of Mn, Cu, Zn, and various light and heavy elements for a sample of barium and normal giant stars, and present correlations between abundances contributed to different degrees by the weak-s, main-s, and r-processes of neutron capture, and between Fe-peak elements and heavy elements. Data from the literature are also considered in order to better study the abundance pattern of peculiar stars. The stellar spectra were observed with FEROS/ESO. The stellar atmospheric parameters of the eight barium giant stars and six normal giants that we analyzed lie in the range 4300 < T_eff/K < 5300, -0.7 < [Fe/H] <= 0.12, and 1.5 <= log g < 2.9. Carbon and nitrogen abundances were derived by spectral synthesis of the molecular bands of C_2, CH, and CN. For all other elements we used the atomic lines to perform the spectral synthesis. A very large scatter was found, mainly for the Mn abundances, when data from the literature were considered. We found that [Zn/Fe] correlates well with the heavy-element excesses, its abundance clearly increasing as the heavy-element excesses increase, a trend not shown by the [Cu/Fe] and [Mn/Fe] ratios. Also, the ratios involving Mn, Cu, and Zn and heavy elements usually show an increasing trend toward higher metallicities. Our results suggest that a larger fraction of the Zn synthesis than of Cu is due to massive stars, and that the contribution of the main-s process to the synthesis of both elements is small. We also conclude that Mn is mostly synthesized by SN Ia, and that a non-negligible fraction of the synthesis of Mn, Cu, and Zn is due to the weak s-process.