893 results for likelihood to publication
Abstract:
Wednesday 23rd April 2014. Speaker(s): Willi Hasselbring. Organiser: Leslie Carr. Time: 23/04/2014 11:00-11:50. Location: B32/3077. File size: 669 MB.
Abstract: For good scientific practice, it is important that research results can be properly checked by reviewers and possibly repeated and extended by other researchers. This is of particular interest for "digital science", i.e. for in-silico experiments. In this talk, I'll discuss how software systems and services may contribute to good scientific practice. In particular, I'll present our PubFlow approach to automating publication workflows for scientific data. The PubFlow workflow management system is based on established technology. We integrate institutional repository systems (based on EPrints) and world data centers (in marine science). PubFlow collects provenance data automatically via our monitoring framework Kieker. Provenance information describes the origins and history of scientific data over its life cycle and the process by which it arrived; it is therefore highly relevant to the repeatability and trustworthiness of scientific results. In our evaluation in marine science, we collaborate with the GEOMAR Helmholtz Centre for Ocean Research Kiel.
Abstract:
Objective: Several bioelectrical impedance analysis (BIA) devices have recently been proposed for the rapid estimation of body fat. However, body-fat reference values for Colombian children and adolescents have not been published. The aim of this study was to establish BIA-derived body-fat percentiles in children and adolescents aged 9 to 17.9 years from Bogotá, Colombia, enrolled in the FUPRECOL study. Methods: A descriptive, cross-sectional study of 2,526 children and 3,324 adolescents aged 9 to 17.9 years attending public schools in Bogotá, Colombia. Body-fat percentage was measured with a Tanita® Body Composition Analyzer (Model BF-689) by age and sex. Weight, height, waist circumference, hip circumference and self-reported sexual maturation stage were recorded. Percentiles (P3, P10, P25, P50, P75, P90 and P97) and centile curves were calculated using the LMS method by sex and age, and the observed values were compared with international standards. Results: Body-fat percentage values and percentile curves are presented. In most age groups, girls had higher body fat than boys. Subjects whose body-fat percentage was above the 90th percentile of the standard normal distribution were considered at elevated cardiovascular risk (boys from 23.4-28.3 and girls from 31.0-34.1). Overall, our body-fat percentages were lower than values reported from Turkey, Germany, Greece, Spain and the United Kingdom. Conclusions: Age- and sex-specific BIA body-fat percentiles are presented that can serve as a reference for assessing nutritional status and predicting cardiovascular risk from an early age.
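The LMS method used here turns three smoothed, age-specific parameters (L, the Box-Cox power; M, the median; S, the coefficient of variation) into any desired centile. A minimal sketch of that conversion, assuming Cole's standard LMS formula and purely illustrative parameter values (not the study's):

```python
import math
from scipy.stats import norm

def lms_centile(L, M, S, p):
    """Value at percentile p given LMS parameters (Cole's formula)."""
    z = norm.ppf(p / 100.0)          # normal equivalent deviate
    if abs(L) < 1e-8:                # limiting case as L -> 0
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Illustrative (hypothetical) parameters for one age/sex group:
for p in (3, 10, 25, 50, 75, 90, 97):
    print(f"P{p}: {lms_centile(L=-1.2, M=22.0, S=0.18, p=p):.1f}")
```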
Abstract:
Objective: To determine the percentile distribution of waist circumference in a school population from Bogotá, Colombia, enrolled in the FUPRECOL study. Methods: A cross-sectional study of 3,005 children and 2,916 adolescents aged 9 to 17.9 years from Bogotá, Colombia. Weight, height, waist circumference, hip circumference and self-reported sexual maturation stage were recorded. Percentiles (P3, P10, P25, P50, P75, P90 and P97) and centile curves were calculated by sex and age, and the observed waist-circumference values were compared with international standards. Results: Of the overall population (n=5,921), 57.0% were girls (mean age 12.7±2.3 years). In most age groups, girls' waist circumference was lower than boys'. The increase between P50 and P97 of waist circumference by age was a minimum of 15.7 cm in boys aged 9-9.9 years and 16.0 cm in girls aged 11-11.9 years. Compared by age group and sex with international studies of children and adolescents, the P50 was below that reported in Peru and England, with the exception of studies from India, Venezuela (Mérida), the United States and Spain. Conclusions: Age- and sex-specific waist-circumference percentiles are presented that can serve as a reference for assessing nutritional status and predicting cardiovascular risk from an early age.
Abstract:
The objectives of the study were (a) to examine which information and design elements on dairy product packages operate as cues in consumer evaluations of product healthfulness, and (b) to measure the degree to which consumers voluntarily attend to these elements during product choice. Visual attention was measured by means of eye-tracking. Task (free viewing, product healthfulness evaluation, and purchase likelihood evaluation) and product (five different yoghurt products) were varied in a mixed within-between subjects design. The free viewing condition served as a baseline against which increases or decreases in attention during product healthfulness evaluation and purchase likelihood evaluation were assessed. The analysis revealed that the only element operating as a health cue during product healthfulness evaluation was the nutrition label. The information cues used during purchase likelihood evaluation were the name of the product category and the nutrition label. Taken together, the results suggest that the only information element that consumers consistently utilize as a health cue is the nutrition label, and that only a limited amount of attention is devoted to reading nutrition labels during purchase likelihood evaluations. The study also revealed that the probability that a consumer will read the nutrition label during the purchase decision process is associated with gender, body mass index and health motivation.
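To make the baseline logic concrete: attention to each package element under the two evaluation tasks can be differenced against the free-viewing condition. A minimal sketch with hypothetical dwell-time data and column names (none of these values come from the study):

```python
import pandas as pd

# Hypothetical fixation data: one row per participant x task x package element.
df = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "task": ["free", "health", "purchase"] * 2,
    "element": ["nutrition_label"] * 6,
    "dwell_ms": [310, 820, 540, 280, 760, 430],
})

baseline = df[df.task == "free"].set_index(["participant", "element"])["dwell_ms"]
tasks = df[df.task != "free"].set_index(["participant", "element"]).copy()

# Positive delta: the element attracts extra attention under the task,
# i.e. it is being used as a cue relative to free viewing.
tasks["delta_ms"] = tasks["dwell_ms"].to_numpy() - baseline.reindex(tasks.index).to_numpy()
print(tasks.groupby("task")["delta_ms"].mean())
```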
Abstract:
This paper discusses proposals for common euro area sovereign securities. Such instruments can potentially serve two functions: in the short term, stabilize financial markets and banks; in the medium term, help improve the euro area economic governance framework through enhanced fiscal discipline and risk-sharing. Many questions remain on whether financial instruments can ever accomplish such goals without bold institutional and political decisions, and whether, in the absence of such decisions, they can create new distortions. The proposals discussed are also not necessarily competing substitutes; rather, they can be complements to be sequenced along alternative paths that possibly culminate in a fully-fledged Eurobond. The specific path chosen by policymakers should allow for learning and secure the necessary evolution of institutional infrastructures and political safeguards.
Abstract:
Large firms contribute disproportionately to the economic performance of countries: they are more productive, pay higher wages, enjoy higher profits and are more successful in international markets. The differences between European countries in terms of the size of their firms are stark. Firms in Italy and Spain, for example, are on average 40 percent smaller than firms in Germany. The low average firm size translates into a chronic lack of large firms. In Italy and Spain, a mere 5 percent of manufacturing firms have more than 250 employees, compared to a much higher 11 percent in Germany. Understanding the roots of these differences is key to improving the economic performance of Europe’s lagging economies. So why is there so much variation in firm size in different European countries? What are the barriers that keep firms in some countries from growing? And which policies are likely to be most effective in breaking down those barriers? This policy report aims to answer these questions by developing a quantitative model of the seven European countries covered by the EFIGE survey (Austria, France, Germany, Hungary, Italy, Spain and the UK). The EFIGE survey asked 14,444 firms in those countries about their performance, their modes of internationalisation, their staffing decisions, their financing structure, and their competitive environment, among other topics.
Abstract:
Populations on the periphery of a species' range may experience more severe environmental conditions relative to populations closer to the core of the range. As a consequence, peripheral populations may have lower reproductive success or survival, which may affect their persistence. In this study, we examined the influence of environmental conditions on breeding biology and nest survival in a threatened population of Loggerhead Shrikes (Lanius ludovicianus) at the northern limit of the range in southeastern Alberta, Canada, and compared our estimates with those from shrike populations elsewhere in the range. Over the 2-year study in 1992–1993, clutch sizes averaged 6.4 eggs, and most nests were initiated between mid-May and mid-June. Rate of renesting following initial nest failure was 19%, and there were no known cases of double-brooding. Compared with southern populations, rate of renesting was lower and clutch sizes tended to be larger, whereas the length of the nestling and hatchling periods appeared to be similar. Most nest failures were directly associated with nest predators, but weather had a greater direct effect in 1993. Nest survival models indicated higher daily nest survival during warmer temperatures and lower precipitation, which may include direct effects of weather on nestlings as well as indirect effects on predator behavior or food abundance. Daily nest survival varied over the nesting cycle in a curvilinear pattern, with a slight increase through laying, approximately constant survival through incubation, and a decline through the nestling period. Partial brood loss during the nestling stage was high, particularly in 1993, when conditions were cool and wet. Overall, the lower likelihood of renesting, lower nest survival, and higher partial brood loss appeared to depress reproductive output in this population relative to those elsewhere in the range, and may have increased susceptibility to population declines.
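Daily nest survival of the kind modeled here is commonly fit as a binomial GLM on nest-day records (survived/failed for each exposure day). The sketch below is a generic illustration on simulated data, not the authors' model; the quadratic term in day of the nesting cycle permits the curvilinear pattern the abstract describes:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated nest-day records (hypothetical; one row per nest per exposure day).
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "temp_c":    rng.normal(15, 5, n),
    "precip_mm": rng.exponential(2, n),
    "cycle_day": rng.integers(1, 35, n),
})
eta = 2.0 + 0.08 * df.temp_c - 0.10 * df.precip_mm - 0.002 * (df.cycle_day - 10) ** 2
df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Daily survival GLM: warmer -> higher survival, wetter -> lower; the
# quadratic cycle_day term lets survival rise then decline over the cycle.
X = sm.add_constant(np.column_stack(
    [df.temp_c, df.precip_mm, df.cycle_day, df.cycle_day ** 2]))
print(sm.GLM(df.survived, X, family=sm.families.Binomial()).fit().params)
```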
Big Decisions and Sparse Data: Adapting Scientific Publishing to the Needs of Practical Conservation
Abstract:
The biggest challenge in conservation biology is closing the gap between research and practical management. A major obstacle is the fact that many researchers are unwilling to tackle projects likely to produce sparse or messy data because the results would be difficult to publish in refereed journals. The obvious solution to sparse data is to build up results from multiple studies. Consequently, we suggest that there needs to be greater emphasis in conservation biology on publishing papers that can be built on by subsequent research rather than on papers that produce clear results individually. This building approach requires: (1) a stronger theoretical framework, in which researchers attempt to anticipate models that will be relevant in future studies and incorporate expected differences among studies into those models; (2) use of modern methods for model selection and multi-model inference, and publication of parameter estimates under a range of plausible models (sketched below); (3) explicit incorporation of prior information into each case study; and (4) planning management treatments in an adaptive framework that considers treatments applied in other studies. We encourage journals to publish papers that promote this building approach rather than expecting papers to conform to traditional standards of rigor as stand-alone papers, and believe that this shift in publishing philosophy would better encourage researchers to tackle the most urgent conservation problems.
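Point (2) rests on standard multi-model inference machinery. As a minimal sketch, Akaike weights convert a candidate set's AIC scores into relative support and allow model-averaged parameter estimates (all numbers below are illustrative):

```python
import numpy as np

def akaike_weights(aic):
    """Relative support for each model: w_i = exp(-delta_i / 2) / sum."""
    aic = np.asarray(aic, dtype=float)
    w = np.exp(-0.5 * (aic - aic.min()))
    return w / w.sum()

# Illustrative AIC scores for four candidate models:
aic = [210.3, 211.1, 214.8, 220.2]
w = akaike_weights(aic)
print(w)

# Model-averaged estimate of a shared parameter (hypothetical estimates):
estimates = np.array([0.42, 0.38, 0.55, 0.61])
print(float(np.sum(w * estimates)))
```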
Abstract:
The automatic tracking technique of Thorncroft and Hodges (2001) has been applied to identify coherent vorticity structures at 850 hPa over West Africa and the tropical Atlantic in the ECMWF 40-year reanalysis. The presence of two dominant source regions, north and south of 15°N over West Africa, for storm tracks over the Atlantic was confirmed. Results show that the southern storm track provides most of the storms that reach the main development region where most tropical cyclones develop. There is marked seasonal variability in the location and intensity of the storms leaving the West African coast, which may influence the likelihood of downstream intensification and longevity. There is also considerable year-to-year variability in the number of West African storm tracks, both in numbers over land and in those continuing out over the tropical Atlantic Ocean. While the low-frequency variability is well correlated with Atlantic tropical cyclone activity, West African rainfall and SSTs, the interannual variability is found to be uncorrelated with these. In contrast, the variance of the 2-6-day-filtered meridional wind, which provides a synoptic-scale measure of African Easterly Wave activity, shows a significant positive correlation with tropical cyclone activity at interannual timescales.
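The synoptic-scale measure mentioned, variance of the 2-6-day-filtered meridional wind, can be sketched with a Butterworth band-pass filter. Everything below (filter order, 6-hourly sampling, simulated winds and cyclone counts) is illustrative rather than the study's setup:

```python
import numpy as np
from scipy import signal

def bandpass_variance(v, dt_days=0.25, long_period=6.0, short_period=2.0):
    """Variance of the 2-6-day band-passed meridional wind (AEW proxy)."""
    fs = 1.0 / dt_days                                   # samples per day
    b, a = signal.butter(3, [1.0 / long_period, 1.0 / short_period],
                         btype="bandpass", fs=fs)
    return np.var(signal.filtfilt(b, a, v))

# Hypothetical: one AEW-activity value per season vs. seasonal TC counts.
rng = np.random.default_rng(1)
activity = np.array([bandpass_variance(rng.normal(0, 3, 480))  # ~120 days, 6-hourly
                     for _ in range(20)])
tc_counts = rng.poisson(10, 20)
print(np.corrcoef(activity, tc_counts)[0, 1])
```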
Abstract:
The variogram is essential for local estimation and mapping of any variable by kriging. The variogram itself must usually be estimated from sample data. The sampling density is a compromise between precision and cost, but it must be sufficiently dense to encompass the principal spatial sources of variance. A nested, multi-stage sampling scheme with separating distances increasing in geometric progression from stage to stage will do that. The data may then be analyzed by a hierarchical analysis of variance to estimate the components of variance for every stage, and hence lag. By accumulating the components starting from the shortest lag, one obtains a rough variogram for modest effort. For balanced designs the analysis of variance is optimal; for unbalanced ones, however, these estimators are not necessarily the best, and analysis by residual maximum likelihood (REML) will usually be preferable. The paper summarizes the underlying theory and illustrates its application with data from three surveys: one in which the design had four stages and was balanced, and two implemented with unbalanced designs to economize when there were more stages. A Fortran program is available for the analysis of variance, and code for the REML analysis is listed in the paper.
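The accumulation step is plain arithmetic: summing the estimated components of variance from the shortest lag upward gives the rough variogram. A minimal sketch with illustrative stage spacings and component values:

```python
import numpy as np

# Illustrative components of variance from a hierarchical ANOVA of a nested
# survey, ordered from the shortest separating distance upward.
lags_m = np.array([10, 30, 90, 270])          # spacings in geometric progression
components = np.array([0.8, 1.5, 2.1, 0.9])   # estimated component per stage

# gamma(h_k) = sum of all components at lags <= h_k.
for h, g in zip(lags_m, np.cumsum(components)):
    print(f"lag {h:4d} m   semivariance {g:.2f}")
```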
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram, because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out, and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites, and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites. A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites.
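For contrast with REML, the method-of-moments (Matheron) estimator averages half the squared differences between all site pairs within each distance bin. A minimal sketch on roughly 50 hypothetical sites with made-up clay-content values:

```python
import numpy as np

def mom_variogram(coords, z, bin_edges):
    """Matheron's MoM estimator: mean of 0.5*(z_i - z_j)^2 per distance bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)            # count each pair once
    d, sq = d[iu], sq[iu]
    centers, gamma = [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        m = (d >= lo) & (d < hi)
        if m.any():
            centers.append(d[m].mean())
            gamma.append(sq[m].mean())
    return np.array(centers), np.array(gamma)

# Illustrative use: 50 random sites in a 500 m square, hypothetical clay %.
rng = np.random.default_rng(2)
coords = rng.uniform(0, 500, (50, 2))
clay = 25 + 0.01 * coords[:, 0] + rng.normal(0, 2, 50)
print(mom_variogram(coords, clay, np.arange(0, 300, 50)))
```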
Abstract:
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass, and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
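A hierarchical REML analysis of an unbalanced nested design can be sketched with statsmodels' mixed-effects machinery; the two nesting stages, labels and simulated values below are hypothetical stand-ins for the study's design:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate an unbalanced nested design: clusters within blocks, with a
# varying number of clusters and samples per cluster (hence unbalanced).
rng = np.random.default_rng(3)
rows = []
for b in range(8):
    block_eff = rng.normal(0, 1.2)               # coarsest-stage effect
    for c in range(int(rng.integers(2, 5))):
        cluster_eff = rng.normal(0, 0.8)         # stage nested within block
        for _ in range(int(rng.integers(2, 4))):
            rows.append({"block": b, "cluster": f"{b}_{c}",
                         "y": 10 + block_eff + cluster_eff + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# REML fit: random intercept per block plus a variance component for
# cluster-within-block; the three variances map onto the nested stages.
fit = smf.mixedlm("y ~ 1", df, groups="block",
                  vc_formula={"cluster": "0 + C(cluster)"}).fit(reml=True)
print(fit.cov_re, fit.vcomp, fit.scale)          # block, cluster, residual
```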
Abstract:
Previous research has shown that people's evaluations of explanations about medication and their intention to comply with the prescription are detrimentally affected by the inclusion of information about adverse side effects of the medication. The present study (Experiment 1) examined which particular aspects of information about side effects (their number, likelihood of occurrence, or severity) are likely to have the greatest effect on people's satisfaction, perception of risk, and intention to comply, as well as how the information about side effects interacts with information about the severity of the illness for which the medication was prescribed. Across all measures, it was found that manipulations of side effect severity had the greatest impact on people's judgements, followed by manipulations of side effect likelihood and then number. Experiments 2 and 3 examined how the severity of the diagnosed illness and information about negative side effects interact with two other factors suggested by Social Cognition models of health behaviour to affect people's intention to comply: namely, perceived benefit of taking the prescribed drug, and the perceived level of control over preventing or alleviating the side effects. It was found that providing people with a statement about the positive benefit of taking the medication had relatively little effect on judgements, whereas informing them about how to reduce the chances of experiencing the side effects had an overall beneficial effect on ratings.
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001, where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, … contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood, which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies: the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
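Both estimators discussed need only the number of observed individuals n and the frequencies f1 (individuals seen exactly once) and f2 (seen exactly twice). A minimal sketch with illustrative counts rather than the Bangkok or Netherlands data:

```python
import math

def zelterman(n, f1, f2):
    """Zelterman's robust estimator for zero-truncated count data."""
    lam = 2.0 * f2 / f1                 # local Poisson rate estimate from f1, f2
    return n / (1.0 - math.exp(-lam))   # inflate n by the implied zero mass

def chao_lower_bound(n, f1, f2):
    """Chao's lower-bound estimator, for comparison."""
    return n + f1 ** 2 / (2.0 * f2)

# Illustrative frequencies: 2000 observed, 1200 seen once, 450 seen twice.
n, f1, f2 = 2000, 1200, 450
print(round(zelterman(n, f1, f2)), round(chao_lower_bound(n, f1, f2)))
```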